qid & accept id: (44780, 45907) query: What's the best way to implement a SQL script that will grant permissions to a database role on all the user tables in a database?

Dr Zimmerman is on the right track here. I'd be looking to write a stored procedure that uses a cursor to loop through the user objects, executing dynamic SQL to effect each grant. Something like this:

 IF EXISTS (
    SELECT 1 FROM sysobjects
    WHERE name = 'sp_grantastic'
    AND type = 'P'
)
DROP PROCEDURE sp_grantastic
GO
CREATE PROCEDURE sp_grantastic
AS
DECLARE
 @object_name VARCHAR(30)
,@time       VARCHAR(8)
,@rights     VARCHAR(20)
,@role       VARCHAR(20)

DECLARE c_objects CURSOR FOR
    SELECT  name
    FROM    sysobjects
    WHERE   type IN ('P', 'U', 'V')
    FOR READ ONLY

BEGIN

    SELECT  @rights = 'ALL'
           ,@role = 'PUBLIC'

    OPEN c_objects
    WHILE (1=1)
    BEGIN
        FETCH c_objects INTO @object_name
        IF @@SQLSTATUS <> 0 BREAK

        SELECT @time = CONVERT(VARCHAR, GetDate(), 108)
        PRINT '[%1!] hitting up object %2!', @time, @object_name
        EXECUTE('GRANT '+ @rights +' ON '+ @object_name+' TO '+@role)

    END

    PRINT '[%1!] fin!', @time

    CLOSE c_objects
    DEALLOCATE CURSOR c_objects
END
GO
GRANT ALL ON sp_grantastic TO PUBLIC
GO

Then you can fire and forget:

EXEC sp_grantastic
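For illustration, the shape of that loop — read object names from the catalog, build one GRANT per object, execute each — can be sketched outside the database. The `build_grants` helper and the object names below are hypothetical stand-ins for the sysobjects query and the EXECUTE() call; embedded engines like SQLite have no GRANT at all.

```python
# Sketch: generate one GRANT statement per object, as the cursor loop above
# does with EXECUTE(). build_grants and the object names are hypothetical
# stand-ins for the sysobjects query and the EXECUTE() call.
def build_grants(object_names, rights="ALL", role="PUBLIC"):
    return [f"GRANT {rights} ON {name} TO {role}" for name in object_names]

# In the procedure, these names would come from the c_objects cursor.
statements = build_grants(["customers", "orders", "vw_sales"])
```

Each string in `statements` is what the procedure hands to EXECUTE() on one loop iteration.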
qid & accept id: (79789, 80134) query: Elegant method for drawing hourly bar chart from time-interval data?

Create a table containing just times, one row for each minute of the day from midnight to midnight. In the data warehouse world we would call this a time dimension. Here's an example:

TIME_DIM
 -id
 -time_of_day
 -interval_15 
 -interval_30

An example of the data in the table would be:

id   time_of_day    interval_15    interval_30
1    00:00          00:00          00:00
...
30   00:23          00:15          00:00
...
100  05:44          05:30          05:30

Then all you have to do is join your table to the time dimension and then group by interval_15. For example:

SELECT b.interval_15, count(*) 
FROM my_data_table a
INNER JOIN time_dim b ON a.time_field = b.time_of_day
WHERE a.date_field = now()
GROUP BY b.interval_15
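As a runnable sketch of the idea, here is the time dimension built and joined in SQLite via Python's sqlite3 module. Table and column names follow the answer; the fact rows in my_data_table are invented, and the date filter is omitted for brevity.

```python
import sqlite3

# Build a minute-grained time dimension and group facts by 15-minute bucket.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE time_dim (id INTEGER, time_of_day TEXT, interval_15 TEXT, interval_30 TEXT)")
rows = []
for i in range(24 * 60):  # one row per minute of the day
    h, m = divmod(i, 60)
    rows.append((i + 1,
                 f"{h:02d}:{m:02d}",
                 f"{h:02d}:{(m // 15) * 15:02d}",   # floor to 15-minute bucket
                 f"{h:02d}:{(m // 30) * 30:02d}"))  # floor to 30-minute bucket
cur.executemany("INSERT INTO time_dim VALUES (?,?,?,?)", rows)

cur.execute("CREATE TABLE my_data_table (time_field TEXT)")
cur.executemany("INSERT INTO my_data_table VALUES (?)",
                [("00:05",), ("00:14",), ("00:20",), ("05:44",)])

cur.execute("""
    SELECT b.interval_15, COUNT(*)
    FROM my_data_table a
    JOIN time_dim b ON a.time_field = b.time_of_day
    GROUP BY b.interval_15
    ORDER BY b.interval_15
""")
buckets = cur.fetchall()  # e.g. 00:05 and 00:14 both land in the 00:00 bucket
```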
qid & accept id: (128623, 131595) query: Disable all table constraints in Oracle

It is better to avoid writing out temporary spool files. Use a PL/SQL block. You can run this from SQL*Plus or put this thing into a package or procedure. The join to USER_TABLES is there to avoid view constraints.

It's unlikely that you really want to disable all constraints (including NOT NULL, primary keys, etc). You should think about putting constraint_type in the WHERE clause.

BEGIN
  FOR c IN
  (SELECT c.owner, c.table_name, c.constraint_name
   FROM user_constraints c, user_tables t
   WHERE c.table_name = t.table_name
   AND c.status = 'ENABLED'
   AND NOT (t.iot_type IS NOT NULL AND c.constraint_type = 'P')
   ORDER BY c.constraint_type DESC)
  LOOP
    dbms_utility.exec_ddl_statement('alter table "' || c.owner || '"."' || c.table_name || '" disable constraint ' || c.constraint_name);
  END LOOP;
END;
/

Enabling the constraints again is a bit trickier - you need to enable primary key constraints before you can reference them in a foreign key constraint. This can be done using an ORDER BY on constraint_type. 'P' = primary key, 'R' = foreign key.

BEGIN
  FOR c IN
  (SELECT c.owner, c.table_name, c.constraint_name
   FROM user_constraints c, user_tables t
   WHERE c.table_name = t.table_name
   AND c.status = 'DISABLED'
   ORDER BY c.constraint_type)
  LOOP
    dbms_utility.exec_ddl_statement('alter table "' || c.owner || '"."' || c.table_name || '" enable constraint ' || c.constraint_name);
  END LOOP;
END;
/
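The ordering trick in the second block can be illustrated on its own: sorting by constraint_type puts 'P' (primary key) rows before 'R' (foreign key) rows, so each primary key is enabled before anything references it. The constraint list below is invented; in Oracle it would come from user_constraints.

```python
# Sketch of the ORDER BY constraint_type trick: primary keys ('P') must be
# enabled before the foreign keys ('R') that reference them. The rows here
# are made up; in Oracle they would come from user_constraints.
constraints = [
    ("child",  "fk_child_parent", "R"),
    ("parent", "pk_parent",       "P"),
    ("child",  "pk_child",        "P"),
]

def enable_statements(constraints):
    # 'P' sorts before 'R', mirroring ORDER BY c.constraint_type.
    ordered = sorted(constraints, key=lambda c: c[2])
    return [f'alter table "{table}" enable constraint {name}'
            for table, name, _ in ordered]

stmts = enable_statements(constraints)
```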
qid & accept id: (182130, 182255) query: SQL - state machine - reporting on historical data based on changeset

This can be done, but would be a lot more efficient if you stored the end date of each log. With your model you have to do something like:

select l1.userid
from status_log l1
where l1.status='s'
and l1.logcreated = (select max(l2.logcreated)
                     from status_log l2
                     where l2.userid = l1.userid
                     and   l2.logcreated <= date '2008-02-15'
                    );

With the additional column it would be more like:

select userid
from status_log
where status='s'
and logcreated <= date '2008-02-15'
and logsuperseded >= date '2008-02-15';

(Apologies for any syntax errors, I don't know Postgresql.)

To address some further issues raised by Phil:

A user might get moved from active, to suspended, to cancelled, to active again. This is a simplified version, in reality, there are even more states and people can be moved directly from one state to another.

This would appear in the table like this:

userid  from       to         status
FRED    2008-01-01 2008-01-31 s
FRED    2008-02-01 2008-02-07 c
FRED    2008-02-08            a

I used a null for the "to" date of the current record. I could have used a future date like 2999-12-31 but null is preferable in some ways.

Additionally, there would be no "end date" for the current status either, so I think this slightly breaks your query?

Yes, my query would have to be re-written as

select userid
from status_log
where status='s'
and logcreated <= date '2008-02-15'
and (logsuperseded is null or logsuperseded >= date '2008-02-15');

A downside of this design is that whenever the user's status changes you have to end-date their current status_log row as well as create a new one. However, that isn't difficult, and I think the query advantage probably outweighs this.
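The end-dated design can be exercised in SQLite via Python's sqlite3 module, using the FRED rows from the table above. Column names logcreated/logsuperseded follow the queries; `status_on` is a hypothetical helper wrapping the as-of predicate, with NULL marking the open-ended current row.

```python
import sqlite3

# End-dated status log: the current row has logsuperseded = NULL.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE status_log (userid TEXT, logcreated TEXT, logsuperseded TEXT, status TEXT)")
cur.executemany("INSERT INTO status_log VALUES (?,?,?,?)", [
    ("FRED", "2008-01-01", "2008-01-31", "s"),
    ("FRED", "2008-02-01", "2008-02-07", "c"),
    ("FRED", "2008-02-08", None,         "a"),
])

def status_on(day):
    # As-of predicate: row started on/before the day, and either is still
    # open (NULL) or was superseded on/after the day.
    cur.execute("""
        SELECT status FROM status_log
        WHERE logcreated <= ?
        AND (logsuperseded IS NULL OR logsuperseded >= ?)
    """, (day, day))
    return [r[0] for r in cur.fetchall()]
```

Each as-of date matches exactly one row, including dates after the last change.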

qid & accept id: (192220, 192462) query: What is the most efficient/elegant way to parse a flat table into a tree?

There are several ways to store tree-structured data in a relational database. What you show in your example uses two methods: Adjacency List (the "parent" column) and Path Enumeration (the dotted numbers in the "name" column).

Another solution is called Nested Sets, and it can be stored in the same table too. Read "Trees and Hierarchies in SQL for Smarties" by Joe Celko for a lot more information on these designs.

I usually prefer a design called Closure Table (aka "Adjacency Relation") for storing tree-structured data. It requires another table, but then querying trees is pretty easy.

I cover Closure Table in my presentation Models for Hierarchical Data with SQL and PHP and in my book SQL Antipatterns: Avoiding the Pitfalls of Database Programming.

CREATE TABLE ClosureTable (
  ancestor_id   INT NOT NULL REFERENCES FlatTable(id),
  descendant_id INT NOT NULL REFERENCES FlatTable(id),
  PRIMARY KEY (ancestor_id, descendant_id)
);

Store all paths in the Closure Table, where there is a direct ancestry from one node to another. Include a row for each node to reference itself. For example, using the data set you showed in your question:

INSERT INTO ClosureTable (ancestor_id, descendant_id) VALUES
  (1,1), (1,2), (1,4), (1,6),
  (2,2), (2,4),
  (3,3), (3,5),
  (4,4),
  (5,5),
  (6,6);

Now you can get a tree starting at node 1 like this:

SELECT f.* 
FROM FlatTable f 
  JOIN ClosureTable a ON (f.id = a.descendant_id)
WHERE a.ancestor_id = 1;

The output (in MySQL client) looks like the following:

+----+
| id |
+----+
|  1 | 
|  2 | 
|  4 | 
|  6 | 
+----+

In other words, nodes 3 and 5 are excluded, because they're part of a separate hierarchy, not descending from node 1.


Re: comment from e-satis about immediate children (or immediate parent). You can add a "path_length" column to the ClosureTable to make it easier to query specifically for an immediate child or parent (or any other distance).

INSERT INTO ClosureTable (ancestor_id, descendant_id, path_length) VALUES
  (1,1,0), (1,2,1), (1,4,2), (1,6,1),
  (2,2,0), (2,4,1),
  (3,3,0), (3,5,1),
  (4,4,0),
  (5,5,0),
  (6,6,0);

Then you can add a term in your search for querying the immediate children of a given node. These are descendants whose path_length is 1.

SELECT f.* 
FROM FlatTable f 
  JOIN ClosureTable a ON (f.id = a.descendant_id)
WHERE a.ancestor_id = 1
  AND path_length = 1;

+----+
| id |
+----+
|  2 | 
|  6 | 
+----+

Re comment from @ashraf: "How about sorting the whole tree [by name]?"

Here's an example query to return all nodes that are descendants of node 1, join them to the FlatTable that contains other node attributes such as name, and sort by the name.

SELECT f.name
FROM FlatTable f 
JOIN ClosureTable a ON (f.id = a.descendant_id)
WHERE a.ancestor_id = 1
ORDER BY f.name;

Re comment from @Nate:

SELECT f.name, GROUP_CONCAT(b.ancestor_id order by b.path_length desc) AS breadcrumbs
FROM FlatTable f 
JOIN ClosureTable a ON (f.id = a.descendant_id) 
JOIN ClosureTable b ON (b.descendant_id = a.descendant_id) 
WHERE a.ancestor_id = 1 
GROUP BY a.descendant_id 
ORDER BY f.name

+------------+-------------+
| name       | breadcrumbs |
+------------+-------------+
| Node 1     | 1           |
| Node 1.1   | 1,2         |
| Node 1.1.1 | 1,2,4       |
| Node 1.2   | 1,6         |
+------------+-------------+

A user suggested an edit today. SO moderators approved the edit, but I am reversing it.

The edit suggested that the ORDER BY in the last query above should be ORDER BY b.path_length, f.name, presumably to make sure the ordering matches the hierarchy. But this doesn't work, because it would order "Node 1.1.1" after "Node 1.2".

If you want the ordering to match the hierarchy in a sensible way, that is possible, but not simply by ordering by the path length. For example, see my answer to MySQL Closure Table hierarchical database - How to pull information out in the correct order.
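The closure-table schema and queries above run essentially verbatim in SQLite; here they are exercised via Python's sqlite3 module. The names for nodes 3 and 5 are invented (the answer only names the descendants of node 1), and the foreign-key clauses are dropped for brevity.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE FlatTable (id INTEGER PRIMARY KEY, name TEXT)")
cur.executemany("INSERT INTO FlatTable VALUES (?,?)", [
    (1, "Node 1"), (2, "Node 1.1"), (3, "Node 2"),
    (4, "Node 1.1.1"), (5, "Node 2.1"), (6, "Node 1.2"),
])
cur.execute("""CREATE TABLE ClosureTable (
    ancestor_id INT NOT NULL, descendant_id INT NOT NULL, path_length INT,
    PRIMARY KEY (ancestor_id, descendant_id))""")
cur.executemany("INSERT INTO ClosureTable VALUES (?,?,?)", [
    (1,1,0), (1,2,1), (1,4,2), (1,6,1),
    (2,2,0), (2,4,1),
    (3,3,0), (3,5,1),
    (4,4,0), (5,5,0), (6,6,0),
])

# All descendants of node 1 (the whole subtree, including node 1 itself):
cur.execute("""SELECT f.id FROM FlatTable f
               JOIN ClosureTable a ON f.id = a.descendant_id
               WHERE a.ancestor_id = 1 ORDER BY f.id""")
subtree = [r[0] for r in cur.fetchall()]

# Immediate children only (path_length = 1):
cur.execute("""SELECT f.id FROM FlatTable f
               JOIN ClosureTable a ON f.id = a.descendant_id
               WHERE a.ancestor_id = 1 AND a.path_length = 1 ORDER BY f.id""")
children = [r[0] for r in cur.fetchall()]
```

Nodes 3 and 5 never appear because no closure row links them under ancestor 1.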

qid & accept id: (216007, 216020) query: How to determine total number of open/active connections in ms sql server 2005

This shows the number of connections per DB:

SELECT 
    DB_NAME(dbid) as DBName, 
    COUNT(dbid) as NumberOfConnections,
    loginame as LoginName
FROM
    sys.sysprocesses
WHERE 
    dbid > 0
GROUP BY 
    dbid, loginame

And this gives the total:

SELECT 
    COUNT(dbid) as TotalConnections
FROM
    sys.sysprocesses
WHERE 
    dbid > 0

If you need more detail, run:

sp_who2 'Active'

Note: The SQL Server account used needs the 'sysadmin' role (otherwise it will just show a single row and a count of 1 as the result)

qid & accept id: (289649, 289849) query: Remapping/Concatenating in SQL

Assuming that the column headings "john", "lucy", etc. are fixed, you can group by the address field and use if() functions combined with aggregate operators to get your results:

select max(if(forename='john',surname,null)) as john,
       max(if(forename='lucy',surname,null)) as lucy,
       max(if(forename='jenny',surname,null)) as jenny,       
       max(if(forename='steve',surname,null)) as steve,       
       max(if(forename='richard',surname,null)) as richard,
       address
from tablename 
group by address;

It is a bit brittle though.

There is also the group_concat function that can be used (within limits) to do something similar, but it will be ordered row-wise rather than column-wise as you appear to require.

eg.

select address, group_concat( concat( forename, surname ) ) tenants 
from tablename
group by address;
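IF() is MySQL-specific; the portable spelling of the same pivot is CASE WHEN. Here is that pattern, plus the group_concat variant, run in SQLite via Python's sqlite3 module with a small invented tenants table (only three of the five forenames, for brevity).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tablename (forename TEXT, surname TEXT, address TEXT)")
cur.executemany("INSERT INTO tablename VALUES (?,?,?)", [
    ("john",  "smith", "1 High St"),
    ("lucy",  "jones", "1 High St"),
    ("steve", "brown", "2 Low Rd"),
])

# CASE WHEN is the portable equivalent of MySQL's IF(cond, x, NULL).
cur.execute("""
    SELECT address,
           MAX(CASE WHEN forename='john'  THEN surname END) AS john,
           MAX(CASE WHEN forename='lucy'  THEN surname END) AS lucy,
           MAX(CASE WHEN forename='steve' THEN surname END) AS steve
    FROM tablename
    GROUP BY address
    ORDER BY address
""")
pivot = cur.fetchall()

# The row-wise group_concat alternative (ordering within a group is not guaranteed).
cur.execute("""SELECT address, GROUP_CONCAT(forename || ' ' || surname) AS tenants
               FROM tablename GROUP BY address ORDER BY address""")
tenants = cur.fetchall()
```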
qid & accept id: (313962, 313995) query: PHP/MySQL: Retrieving the last *full* weeks entries

See the MySQL function YEARWEEK().

So you could do something like

SELECT * FROM table WHERE YEARWEEK(purchased) = YEARWEEK(NOW());

You can change the starting day of the week by using a second mode parameter.

What might be better, however, is to calculate the date of 'last Sunday at 00:00'; then the database would not have to run a function for each row. I couldn't see an obvious way of doing that in MySQL, but you could easily generate it in PHP and do something like

$sunday = date(('Y-m-d H:i:s'), strtotime('last sunday 00:00'));
$sql = "SELECT * FROM table WHERE purchased >= '$sunday'";
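The same approach can be sketched in Python: compute 'last Sunday 00:00' with datetime (a stand-in for PHP's strtotime) and bind it as a query parameter rather than interpolating it into the SQL string. The purchases table is invented (the answer's literal name `table` is a reserved word), and note the helper treats a Sunday "now" as its own cutoff, which differs slightly from PHP's 'last sunday'.

```python
import sqlite3
from datetime import datetime, timedelta

def last_sunday(now):
    # weekday(): Monday=0 ... Sunday=6; step back to the most recent
    # Sunday at 00:00 (returns today if now is already a Sunday, which
    # differs from PHP strtotime('last sunday') on Sundays).
    days_back = (now.weekday() + 1) % 7
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    return midnight - timedelta(days=days_back)

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE purchases (purchased TEXT)")
cur.executemany("INSERT INTO purchases VALUES (?)", [
    ("2008-12-13 09:00:00",),  # Saturday, before the cutoff
    ("2008-12-14 10:30:00",),  # Sunday, on/after the cutoff
    ("2008-12-16 08:00:00",),
])

now = datetime(2008, 12, 16, 12, 0, 0)  # a Tuesday
cutoff = last_sunday(now).strftime("%Y-%m-%d %H:%M:%S")
# Bound parameter instead of string interpolation.
cur.execute("SELECT COUNT(*) FROM purchases WHERE purchased >= ?", (cutoff,))
count = cur.fetchone()[0]
```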
qid & accept id: (318528, 321624) query: How do you identify the triggers associated with a table in a sybase database?

I also found out that

sp_depends  

will show you a lot of information about a table, including all the triggers associated with it. Using that along with Ray's query makes it much easier to find the triggers. Combined with this procedure from Ray's linked article:

sp_helptext 

you can see the definition of the trigger. Run against a trigger,

sp_depends 

will also show you all the tables related to it.

qid & accept id: (363084, 363089) query: MYSQL - How would I Export tables specifying only certain fields?
SELECT A,B,C
FROM X
INTO OUTFILE 'file name';

You need the FILE privilege to do this, and it won't overwrite files.

INTO OUTFILE takes a number of options as well, such as FIELDS ENCLOSED BY and FIELDS ESCAPED BY, which you may want to look up in the manual.

To produce a CSV file, you would do something like:

SELECT A,B,C
INTO OUTFILE '/tmp/result.txt'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM X;

To load the data back in from the file, use the LOAD DATA INFILE command with the same options you used to dump it out. For the CSV format above, that would be

LOAD DATA INFILE '/tmp/result.txt'
INTO TABLE X
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';
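SELECT ... INTO OUTFILE writes on the server and is MySQL-specific. A client-side analogue of the same CSV format can be sketched with Python's csv module, where QUOTE_MINIMAL plays the role of OPTIONALLY ENCLOSED BY '"' and lineterminator matches LINES TERMINATED BY '\n'. The table X and its rows here are invented.

```python
import csv
import io
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE X (A TEXT, B TEXT, C INTEGER)")
cur.executemany("INSERT INTO X VALUES (?,?,?)",
                [("a1", "hello, world", 1), ("a2", "plain", 2)])

buf = io.StringIO()
# QUOTE_MINIMAL quotes only fields that need it, like OPTIONALLY ENCLOSED BY.
writer = csv.writer(buf, quoting=csv.QUOTE_MINIMAL, lineterminator="\n")
for row in cur.execute("SELECT A, B, C FROM X"):
    writer.writerow(row)
exported = buf.getvalue()
```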
qid & accept id: (374079, 374191) query: Group repeated rows in TSQL

This is a set-based solution for the problem. The performance will probably suck, but it works :)

CREATE TABLE #LogEntries (
  ID INT IDENTITY,
  LogEntry VARCHAR(100)
)

INSERT INTO #LogEntries VALUES ('beans')
INSERT INTO #LogEntries VALUES ('beans')
INSERT INTO #LogEntries VALUES ('beans')
INSERT INTO #LogEntries VALUES ('cabbage')
INSERT INTO #LogEntries VALUES ('cabbage')
INSERT INTO #LogEntries VALUES ('carrots')
INSERT INTO #LogEntries VALUES ('beans')
INSERT INTO #LogEntries VALUES ('beans')
INSERT INTO #LogEntries VALUES ('carrots')

SELECT logentry, COUNT(*) FROM (
    SELECT logentry, 
    ISNULL((SELECT MAX(id) FROM #logentries l2 WHERE l1.logentry<>l2.logentry AND l2.id < l1.id), 0) AS id
    FROM #LogEntries l1
) AS a
GROUP BY logentry, id


DROP TABLE #logentries 

Results:

beans   3
cabbage 2
carrots 1
beans   2
carrots 1

The ISNULL() is required for the first set of beans.
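The same set-based query runs in SQLite with two small substitutions: ISNULL() becomes IFNULL(), and the IDENTITY column becomes INTEGER PRIMARY KEY AUTOINCREMENT. Exercised here via Python's sqlite3 module with the answer's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE LogEntries (id INTEGER PRIMARY KEY AUTOINCREMENT, logentry TEXT)")
for e in ["beans", "beans", "beans", "cabbage", "cabbage",
          "carrots", "beans", "beans", "carrots"]:
    cur.execute("INSERT INTO LogEntries (logentry) VALUES (?)", (e,))

# grp is the max id of the nearest *different* entry before this row, so
# every row in one unbroken run shares the same (logentry, grp) pair.
cur.execute("""
    SELECT logentry, COUNT(*)
    FROM (
        SELECT logentry,
               IFNULL((SELECT MAX(id) FROM LogEntries l2
                       WHERE l1.logentry <> l2.logentry AND l2.id < l1.id), 0) AS grp
        FROM LogEntries l1
    ) AS a
    GROUP BY logentry, grp
    ORDER BY grp
""")
runs = cur.fetchall()
```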

qid & accept id: (379556, 380440) query: Time slicing in Oracle/SQL

In terms of getting the data out, you can use GROUP BY and TRUNC to slice the data into 1-minute intervals. eg:

SELECT user_name, TRUNC(event_time, 'MI'), COUNT(*)
FROM job_table
WHERE event_time > TO_DATE( some start date time)
AND user_name IN ( list of users to query )
GROUP BY user_name, TRUNC(event_time, 'MI')

This will give you results like below (assuming there are 20 rows for alice between 8.00 and 8.01 and 40 rows between 8.01 and 8.02):

Alice  2008-12-16 08:00   20
Alice  2008-12-16 08:01   40
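As a runnable sketch of the same slicing in SQLite (via Python's sqlite3 module), strftime('%Y-%m-%d %H:%M', ...) plays the role of Oracle's minute truncation; the sample rows are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE job_table (user_name TEXT, event_time TEXT)")
cur.executemany("INSERT INTO job_table VALUES (?,?)", [
    ("Alice", "2008-12-16 08:00:05"),
    ("Alice", "2008-12-16 08:00:40"),
    ("Alice", "2008-12-16 08:01:10"),
])

# Truncate timestamps to the minute, then count events per minute bucket.
cur.execute("""
    SELECT user_name, strftime('%Y-%m-%d %H:%M', event_time) AS minute, COUNT(*)
    FROM job_table
    GROUP BY user_name, minute
    ORDER BY minute
""")
slices = cur.fetchall()
```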
qid & accept id: (439138, 439387) query: Running total by grouped records in table

Do you really need the extra table?

You can get that data you need with a simple query, which you can obviously create as a view if you want it to appear like a table.

This will get you the data you are looking for:

select 
    account, bookdate, amount, 
    sum(amount) over (partition by account order by bookdate) running_total
from t
/

This will create a view to show you the data as if it were a table:

create or replace view t2
as
select 
    account, bookdate, amount, 
    sum(amount) over (partition by account order by bookdate) running_total 
from t
/

If you really need the table, do you mean that you need it constantly updated, or just a one-off? Obviously if it's a one-off you can just "create table as select" using the above query.

Test data I used is:

create table t(account number, bookdate date, amount number);

insert into t(account, bookdate, amount) values (1, to_date('20080101', 'yyyymmdd'), 100);

insert into t(account, bookdate, amount) values (1, to_date('20080102', 'yyyymmdd'), 101);

insert into t(account, bookdate, amount) values (1, to_date('20080103', 'yyyymmdd'), -200);

insert into t(account, bookdate, amount) values (2, to_date('20080102', 'yyyymmdd'), 200);

commit;

edit:

Forgot to add: you specified that you wanted the table to be ordered. This doesn't really make sense, and makes me think that you really mean that you want the query/view; ordering is a result of the query you execute, not something that's inherent in the table (ignoring Index Organised Tables and the like).
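For what it's worth, the same window-function query runs unchanged on any engine with analytic SUM() support. Here is a quick sketch against SQLite (3.25 or later) from Python, with the dates stored as ISO strings instead of Oracle DATEs:

```python
import sqlite3

# In-memory database seeded with the same test data as above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t(account INTEGER, bookdate TEXT, amount INTEGER);
    INSERT INTO t VALUES (1, '2008-01-01', 100);
    INSERT INTO t VALUES (1, '2008-01-02', 101);
    INSERT INTO t VALUES (1, '2008-01-03', -200);
    INSERT INTO t VALUES (2, '2008-01-02', 200);
""")

# SUM() OVER (PARTITION BY ... ORDER BY ...) accumulates per account.
rows = conn.execute("""
    SELECT account, bookdate, amount,
           SUM(amount) OVER (PARTITION BY account ORDER BY bookdate) AS running_total
    FROM t
    ORDER BY account, bookdate
""").fetchall()
```

The per-account totals come back as 100, 201, 1 for account 1 and 200 for account 2.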

qid & accept id: (501021, 501037) query: Python + SQLite query to find entries that sit in a specified time slot soup:

SQLite3 doesn't have a datetime type, though it does have date and time functions.

Typically you store dates and times in your database in something like ISO 8601 format: YYYY-MM-DD HH:MM:SS. Then datetimes sort lexicographically into time order.

With your datetimes stored this way, you simply use text comparisons such as

SELECT * FROM tbl WHERE tbl.start = '2009-02-01 10:30:00'

or

SELECT * FROM tbl WHERE '2009-02-01 10:30:00' BETWEEN tbl.start AND tbl.end;
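To tie this back to the Python side of the question, here is a minimal sketch with the sqlite3 module (the sample rows are invented for illustration; note that "end" has to be quoted since it is an SQL keyword):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE tbl(name TEXT, start TEXT, "end" TEXT)')
# ISO 8601 strings sort lexicographically into time order.
conn.executemany("INSERT INTO tbl VALUES (?, ?, ?)", [
    ("standup", "2009-02-01 10:00:00", "2009-02-01 11:00:00"),
    ("lunch",   "2009-02-01 12:00:00", "2009-02-01 13:00:00"),
])

# Plain text comparison is enough to find the slot covering 10:30.
hits = conn.execute(
    'SELECT name FROM tbl WHERE ? BETWEEN start AND "end"',
    ("2009-02-01 10:30:00",),
).fetchall()
```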
qid & accept id: (521270, 558434) query: Best way to implement a stored procedure with full text search soup:

I agree with the above; look into AND clauses:

SELECT TITLE
FROM MOVIES
WHERE CONTAINS(TITLE,'"hollywood*" AND "square*"')

However, you shouldn't have to split the input sentence; you can use a variable:

SELECT TITLE
FROM MOVIES
WHERE CONTAINS(TITLE,@parameter)

By the way: use CONTAINS to search for the exact term, and FREETEXT to search for any term in the phrase.
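If the engine happens to be SQLite rather than SQL Server, the closest analogue to CONTAINS is an FTS5 MATCH, and the whole search string can likewise be bound as a single parameter (table and data below are invented for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE movies USING fts5(title)")
conn.executemany("INSERT INTO movies(title) VALUES (?)",
                 [("Hollywood Squares",), ("Hollywood Ending",)])

# Prefix terms joined with AND, passed as one bound parameter.
hits = conn.execute("SELECT title FROM movies WHERE movies MATCH ?",
                    ("hollywood* AND square*",)).fetchall()
```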

qid & accept id: (539942, 539951) query: Updating multiple rows with a value calculated from another column soup:
SELECT SUBSTRING(colDate, 1, 8) AS 'date' 
FROM someTable

Or am I mistaken?

UPDATE someTable
SET newDateField = SUBSTRING(colDate, 1, 8)

Would likely work too. Untested.
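Note that T-SQL's SUBSTRING is 1-based (a start of 0 silently returns one character fewer than asked for). A quick sketch of the same update in SQLite, whose substr() is also 1-based, driven from Python with invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE someTable(colDate TEXT, newDateField TEXT)")
conn.execute("INSERT INTO someTable(colDate) VALUES ('20090210 13:45')")

# substr() is 1-based, so start at 1 to keep all 8 date characters.
conn.execute("UPDATE someTable SET newDateField = substr(colDate, 1, 8)")
value = conn.execute("SELECT newDateField FROM someTable").fetchone()[0]
```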

qid & accept id: (556509, 556550) query: SQL : update statement with dynamic column value assignment soup:
UPDATE mytable, (
  SELECT @loop := MAX(col1)
  FROM
    mytable
  ) o
SET col1 = (@loop := @loop + 1)

What you encountered here is called query stability.

No query can see the changes made by itself, or the following query:

UPDATE mytable
SET col1 = col2 + 1
WHERE col1 > col2 

would never end.
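The same guarantee can be observed in other engines. A sketch with SQLite via Python: the UPDATE below re-satisfies its own WHERE clause on every write, yet each row is visited exactly once because the statement reads from a consistent snapshot.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable(col1 INTEGER, col2 INTEGER)")
conn.executemany("INSERT INTO mytable VALUES (?, ?)", [(5, 1), (10, 2)])

# Every row still has col1 > col2 after being updated, but the
# statement terminates: it cannot see its own changes.
conn.execute("UPDATE mytable SET col1 = col2 + 1 WHERE col1 > col2")
rows = conn.execute("SELECT col1, col2 FROM mytable ORDER BY col2").fetchall()
```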

qid & accept id: (560694, 560760) query: adding a tags field to an asp.net web page soup:

Here is an oversimplified example. I am using C#, but converting it to VB should be trivial. You will need to dig into lots more detail.

Assuming that you are using webforms, you need a textbox on your page; for example:

<asp:TextBox ID="txtTags" runat="server" />

Assuming that you have a submit button wired to the handler below; for example:

<asp:Button ID="btnSave" runat="server" Text="Save" OnClick="SaveTags" />

You would have a SaveTags method that handles the click event:

protected void SaveTags(object sender, EventArgs e)
{
    string[] tags = txtTags.Text.Split(' ');

    SqlConnection connection = new SqlConnection("Your connection string");
    connection.Open();
    SqlCommand command = connection.CreateCommand();
    command.CommandText = "Insert Into Tags(tag) Values(@tag)";
    foreach (string tag in tags)
    {
        command.Parameters.Clear();
        command.Parameters.AddWithValue("@tag", tag);
        command.ExecuteNonQuery();
    }
    connection.Close();
}
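The same split-and-insert loop, sketched in Python with sqlite3 to show the parameter binding independently of ADO.NET (the Tags table name comes from the snippet above; everything else is invented):

```python
import sqlite3

def save_tags(conn, text):
    # Split the space-separated input and bind each tag as a parameter,
    # mirroring the @tag parameter in the C# version.
    tags = text.split(" ")
    with conn:  # commits on success, rolls back on error
        conn.executemany("INSERT INTO Tags(tag) VALUES (?)",
                         [(t,) for t in tags])
    return tags

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Tags(tag TEXT)")
save_tags(conn, "asp.net sql tagging")
stored = [row[0] for row in conn.execute("SELECT tag FROM Tags ORDER BY tag")]
```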
qid & accept id: (576071, 576147) query: coverage percentage using a complex sql query...? soup:
SELECT AVG(covered)
FROM (
  SELECT CASE WHEN COUNT(*) >= 2 THEN 1 ELSE 0 END AS covered
  FROM app a
  LEFT JOIN skill s ON (s.id_app = a.id AND s.lvl >= 2)
  GROUP BY a.id
) AS c

More efficient way for MySQL:

SELECT AVG
       (
         IFNULL
         (
           (
           SELECT 1
           FROM skill s
           WHERE s.id_app = a.id
           AND s.lvl >= 2
           LIMIT 1, 1
           ), 0
         )
       )
FROM app a

This will stop counting as soon as it finds the second skilled person for each app.

Efficient if you have few apps but lots of people.
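A runnable sketch of the first query's shape, on invented data (app 1 has two people at lvl >= 2, app 2 has one, app 3 has none; COUNT is taken over s.id_app here so unmatched LEFT JOIN rows count as zero):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE app(id INTEGER PRIMARY KEY);
    CREATE TABLE skill(id_app INTEGER, lvl INTEGER);
    INSERT INTO app(id) VALUES (1), (2), (3);
    INSERT INTO skill VALUES (1, 2), (1, 3), (2, 2), (3, 1);
""")

# An app is covered when at least two people have lvl >= 2;
# AVG over the 0/1 flags yields the coverage fraction.
coverage = conn.execute("""
    SELECT AVG(covered) FROM (
        SELECT CASE WHEN COUNT(s.id_app) >= 2 THEN 1 ELSE 0 END AS covered
        FROM app a
        LEFT JOIN skill s ON s.id_app = a.id AND s.lvl >= 2
        GROUP BY a.id
    ) c
""").fetchone()[0]
```

One app in three is covered, so the query returns one third.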

qid & accept id: (603504, 603531) query: How do you convert SYS_GUID() to varchar? soup:

Don't forget to use HEXTORAW(varchar2) when comparing this value to the RAW columns.

There is no implicit conversion from VARCHAR2 to RAW. That means that this clause:

WHERE raw_column = :varchar_value

will be implicitly converted into:

WHERE RAWTOHEX(raw_column) = :varchar_value

thus making indexes on raw_column unusable.

Use:

WHERE raw_column = HEXTORAW(:varchar_value)

instead.
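From client code the same conversion can be done once before binding, so the RAW column is always compared against raw bytes. Python's bytes API is the direct analogue of the two Oracle functions:

```python
# bytes.fromhex corresponds to HEXTORAW, bytes.hex to RAWTOHEX.
raw_value = bytes.fromhex("1A2B3C")   # like HEXTORAW('1A2B3C')
hex_value = raw_value.hex().upper()   # like RAWTOHEX(raw_value)
```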

qid & accept id: (674776, 674801) query: Unified records for database query with Sql soup:

You will need to join your sub requester attribute table to the query twice: once with the attribute of Urgent and once with the attribute of Closed.

You will need to LEFT join to these for the instances where they may be null, and then reference each of the tables in your SELECT to show the relevant attribute.

I also wouldn't recommend the cross join. You should perform your "OR" join on the personnel table in the FROM clause rather than doing a cross join and filtering in the WHERE clause.

EDIT: Sorry, my first response was a bit rushed; I've now had a chance to look further. Because the sub requester and the sub requester attribute can both contain duplicates, you need to split them up into subqueries. Also, your modified date could be different for both values, so I've doubled that up. This is completely untested, and by no means the "optimum" solution. It's quite tricky to write the query without the actual database to check against. Hopefully it will explain what I meant though.

SELECT
    r.RequesterID,
    p.FirstName + ' ' + p.LastName AS RequesterName,
    sra1.ModifiedDate as UrgentModifiedDate,
    sra1.AttributeValue as Urgent,
    sra2.ModifiedDate as ClosedModifiedDate,
    sra2.AttributeValue as Closed
FROM
    Personnel AS p
INNER JOIN
    Requester AS r 
ON
(
    r.UserID = p.ContractorID
OR
    r.UserID = p.EmployeeID
)
LEFT OUTER JOIN
(
    SELECT
        sr1.RequesterID,
        sr1.ModifiedDate,
        sa1.Attribute,
        sa1.AttributeValue
    FROM
        SubRequester AS sr1
    INNER JOIN
        SubRequesterAttribute AS sa1
    ON
        sr1.SubRequesterID = sa1.SubRequesterID
    AND
        sa1.Attribute = 'Urgent'
) sra1
ON
    sra1.RequesterID = r.RequesterID
LEFT OUTER JOIN
(
    SELECT
        sr2.RequesterID,
        sr2.ModifiedDate,
        sa2.Attribute,
        sa2.AttributeValue
    FROM
        SubRequester AS sr2
    INNER JOIN
        SubRequesterAttribute AS sa2
    ON
        sr2.SubRequesterID = sa2.SubRequesterID
    AND
        sa2.Attribute = 'Closed'
) sra2
ON
    sra2.RequesterID = r.RequesterID

SECOND EDIT: My last edit assumed that there were multiple SubRequesters as well as multiple Attributes; from your last comment it seems you want to show all SubRequesters and the two relevant attributes? You can achieve this as follows.

SELECT
    r.RequesterID,
    p.FirstName + ' ' + p.LastName AS RequesterName,
    sr.ModifiedDate,
    sa1.AttributeValue as Urgent,
    sa2.AttributeValue as Closed
FROM
    Personnel AS p
INNER JOIN
    Requester AS r 
ON
(
    r.UserID = p.ContractorID
OR
    r.UserID = p.EmployeeID
)
INNER JOIN
    SubRequester as sr
ON
    sr.RequesterID = r.RequesterID
LEFT OUTER JOIN
    SubRequesterAttribute AS sa1
ON
    sa1.SubRequesterID = sr.SubRequesterID
AND
    sa1.Attribute = 'Urgent'
LEFT OUTER JOIN
    SubRequesterAttribute AS sa2
ON
    sa2.SubRequesterID = sr.SubRequesterID
AND
    sa2.Attribute = 'Closed'
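The core trick in that last query (joining the attribute table twice, once per attribute, with LEFT joins so a missing attribute comes back as NULL) can be checked in isolation. A sketch with invented rows in SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE SubRequester(SubRequesterID INTEGER, RequesterID INTEGER);
    CREATE TABLE SubRequesterAttribute(
        SubRequesterID INTEGER, Attribute TEXT, AttributeValue TEXT);
    INSERT INTO SubRequester VALUES (10, 1), (11, 1);
    INSERT INTO SubRequesterAttribute VALUES (10, 'Urgent', 'Yes');
    INSERT INTO SubRequesterAttribute VALUES (10, 'Closed', 'No');
    INSERT INTO SubRequesterAttribute VALUES (11, 'Urgent', 'No');
""")

# Two LEFT JOINs flatten two attribute rows into two columns;
# sub-requester 11 has no 'Closed' row, so that column is NULL.
rows = conn.execute("""
    SELECT sr.SubRequesterID,
           sa1.AttributeValue AS Urgent,
           sa2.AttributeValue AS Closed
    FROM SubRequester sr
    LEFT JOIN SubRequesterAttribute sa1
      ON sa1.SubRequesterID = sr.SubRequesterID AND sa1.Attribute = 'Urgent'
    LEFT JOIN SubRequesterAttribute sa2
      ON sa2.SubRequesterID = sr.SubRequesterID AND sa2.Attribute = 'Closed'
    ORDER BY sr.SubRequesterID
""").fetchall()
```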
qid & accept id: (684106, 684158) query: Find the smallest unused number in SQL Server soup:

Find the first row where there does not exist a row with Id + 1

SELECT TOP 1 t1.Id+1 
FROM table t1
WHERE NOT EXISTS(SELECT * FROM table t2 WHERE t2.Id = t1.Id + 1)
ORDER BY t1.Id

Edit:

To handle the special case where the lowest existing id is not 1, here is an ugly solution:

SELECT TOP 1 * FROM (
    SELECT t1.Id+1 AS Id
    FROM table t1
    WHERE NOT EXISTS(SELECT * FROM table t2 WHERE t2.Id = t1.Id + 1 )
    UNION 
    SELECT 1 AS Id
    WHERE NOT EXISTS (SELECT * FROM table t3 WHERE t3.Id = 1)) ot
ORDER BY 1
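The same approach translated to SQLite syntax (LIMIT instead of TOP; SQLite also accepts the FROM-less SELECT used for the union branch), exercised on ids 2, 3, 4, 6, where the answer should be 1:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl(Id INTEGER)")
conn.executemany("INSERT INTO tbl VALUES (?)", [(2,), (3,), (4,), (6,)])

# Candidates are every Id+1 without a successor, plus 1 when 1 is free;
# the smallest candidate is the smallest unused number.
gap = conn.execute("""
    SELECT Id FROM (
        SELECT t1.Id + 1 AS Id
        FROM tbl t1
        WHERE NOT EXISTS (SELECT 1 FROM tbl t2 WHERE t2.Id = t1.Id + 1)
        UNION
        SELECT 1 AS Id
        WHERE NOT EXISTS (SELECT 1 FROM tbl t3 WHERE t3.Id = 1)
    ) ot
    ORDER BY Id
    LIMIT 1
""").fetchone()[0]
```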
qid & accept id: (706664, 16797460) query: Generate SQL Create Scripts for existing tables with Query soup:

Possibly this will be helpful for you. This script generates the indexes, foreign keys, primary key, and general structure for any table.

For example -

DDL:

CREATE TABLE [dbo].[WorkOut](
    [WorkOutID] [bigint] IDENTITY(1,1) NOT NULL,
    [TimeSheetDate] [datetime] NOT NULL,
    [DateOut] [datetime] NOT NULL,
    [EmployeeID] [int] NOT NULL,
    [IsMainWorkPlace] [bit] NOT NULL,
    [DepartmentUID] [uniqueidentifier] NOT NULL,
    [WorkPlaceUID] [uniqueidentifier] NULL,
    [TeamUID] [uniqueidentifier] NULL,
    [WorkShiftCD] [nvarchar](10) NULL,
    [WorkHours] [real] NULL,
    [AbsenceCode] [varchar](25) NULL,
    [PaymentType] [char](2) NULL,
    [CategoryID] [int] NULL,
    [Year]  AS (datepart(year,[TimeSheetDate])),
 CONSTRAINT [PK_WorkOut] PRIMARY KEY CLUSTERED 
(
    [WorkOutID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

ALTER TABLE [dbo].[WorkOut] ADD  
CONSTRAINT [DF__WorkOut__IsMainW__2C1E8537]  DEFAULT ((1)) FOR [IsMainWorkPlace]

ALTER TABLE [dbo].[WorkOut]  WITH CHECK ADD  CONSTRAINT [FK_WorkOut_Employee_EmployeeID] FOREIGN KEY([EmployeeID])
REFERENCES [dbo].[Employee] ([EmployeeID])

ALTER TABLE [dbo].[WorkOut] CHECK CONSTRAINT [FK_WorkOut_Employee_EmployeeID]

Query:

DECLARE @table_name SYSNAME
SELECT @table_name = 'dbo.WorkOut'

DECLARE 
      @object_name SYSNAME
    , @object_id INT

SELECT 
      @object_name = '[' + s.name + '].[' + o.name + ']'
    , @object_id = o.[object_id]
FROM sys.objects o WITH (NOWAIT)
JOIN sys.schemas s WITH (NOWAIT) ON o.[schema_id] = s.[schema_id]
WHERE s.name + '.' + o.name = @table_name
    AND o.[type] = 'U'
    AND o.is_ms_shipped = 0

DECLARE @SQL NVARCHAR(MAX) = ''

;WITH index_column AS 
(
    SELECT 
          ic.[object_id]
        , ic.index_id
        , ic.is_descending_key
        , ic.is_included_column
        , c.name
    FROM sys.index_columns ic WITH (NOWAIT)
    JOIN sys.columns c WITH (NOWAIT) ON ic.[object_id] = c.[object_id] AND ic.column_id = c.column_id
    WHERE ic.[object_id] = @object_id
),
fk_columns AS 
(
     SELECT 
          k.constraint_object_id
        , cname = c.name
        , rcname = rc.name
    FROM sys.foreign_key_columns k WITH (NOWAIT)
    JOIN sys.columns rc WITH (NOWAIT) ON rc.[object_id] = k.referenced_object_id AND rc.column_id = k.referenced_column_id 
    JOIN sys.columns c WITH (NOWAIT) ON c.[object_id] = k.parent_object_id AND c.column_id = k.parent_column_id
    WHERE k.parent_object_id = @object_id
)
SELECT @SQL = 'CREATE TABLE ' + @object_name + CHAR(13) + '(' + CHAR(13) + STUFF((
    SELECT CHAR(9) + ', [' + c.name + '] ' + 
        CASE WHEN c.is_computed = 1
            THEN 'AS ' + cc.[definition] 
            ELSE UPPER(tp.name) + 
                CASE WHEN tp.name IN ('varchar', 'char', 'varbinary', 'binary', 'text')
                       THEN '(' + CASE WHEN c.max_length = -1 THEN 'MAX' ELSE CAST(c.max_length AS VARCHAR(5)) END + ')'
                     WHEN tp.name IN ('nvarchar', 'nchar', 'ntext')
                       THEN '(' + CASE WHEN c.max_length = -1 THEN 'MAX' ELSE CAST(c.max_length / 2 AS VARCHAR(5)) END + ')'
                     WHEN tp.name IN ('datetime2', 'time2', 'datetimeoffset') 
                       THEN '(' + CAST(c.scale AS VARCHAR(5)) + ')'
                     WHEN tp.name = 'decimal' 
                       THEN '(' + CAST(c.[precision] AS VARCHAR(5)) + ',' + CAST(c.scale AS VARCHAR(5)) + ')'
                    ELSE ''
                END +
                CASE WHEN c.collation_name IS NOT NULL THEN ' COLLATE ' + c.collation_name ELSE '' END +
                CASE WHEN c.is_nullable = 1 THEN ' NULL' ELSE ' NOT NULL' END +
                CASE WHEN dc.[definition] IS NOT NULL THEN ' DEFAULT' + dc.[definition] ELSE '' END + 
                CASE WHEN ic.is_identity = 1 THEN ' IDENTITY(' + CAST(ISNULL(ic.seed_value, '0') AS CHAR(1)) + ',' + CAST(ISNULL(ic.increment_value, '1') AS CHAR(1)) + ')' ELSE '' END 
        END + CHAR(13)
    FROM sys.columns c WITH (NOWAIT)
    JOIN sys.types tp WITH (NOWAIT) ON c.user_type_id = tp.user_type_id
    LEFT JOIN sys.computed_columns cc WITH (NOWAIT) ON c.[object_id] = cc.[object_id] AND c.column_id = cc.column_id
    LEFT JOIN sys.default_constraints dc WITH (NOWAIT) ON c.default_object_id != 0 AND c.[object_id] = dc.parent_object_id AND c.column_id = dc.parent_column_id
    LEFT JOIN sys.identity_columns ic WITH (NOWAIT) ON c.is_identity = 1 AND c.[object_id] = ic.[object_id] AND c.column_id = ic.column_id
    WHERE c.[object_id] = @object_id
    ORDER BY c.column_id
    FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, CHAR(9) + ' ')
    + ISNULL((SELECT CHAR(9) + ', CONSTRAINT [' + k.name + '] PRIMARY KEY (' + 
                    (SELECT STUFF((
                         SELECT ', [' + c.name + '] ' + CASE WHEN ic.is_descending_key = 1 THEN 'DESC' ELSE 'ASC' END
                         FROM sys.index_columns ic WITH (NOWAIT)
                         JOIN sys.columns c WITH (NOWAIT) ON c.[object_id] = ic.[object_id] AND c.column_id = ic.column_id
                         WHERE ic.is_included_column = 0
                             AND ic.[object_id] = k.parent_object_id 
                             AND ic.index_id = k.unique_index_id     
                         FOR XML PATH(N''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, ''))
            + ')' + CHAR(13)
            FROM sys.key_constraints k WITH (NOWAIT)
            WHERE k.parent_object_id = @object_id 
                AND k.[type] = 'PK'), '') + ')'  + CHAR(13)
    + ISNULL((SELECT (
        SELECT CHAR(13) +
             'ALTER TABLE ' + @object_name + ' WITH' 
            + CASE WHEN fk.is_not_trusted = 1 
                THEN ' NOCHECK' 
                ELSE ' CHECK' 
              END + 
              ' ADD CONSTRAINT [' + fk.name  + '] FOREIGN KEY(' 
              + STUFF((
                SELECT ', [' + k.cname + ']'
                FROM fk_columns k
                WHERE k.constraint_object_id = fk.[object_id]
                FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, '')
               + ')' +
              ' REFERENCES [' + SCHEMA_NAME(ro.[schema_id]) + '].[' + ro.name + '] ('
              + STUFF((
                SELECT ', [' + k.rcname + ']'
                FROM fk_columns k
                WHERE k.constraint_object_id = fk.[object_id]
                FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, '')
               + ')'
            + CASE 
                WHEN fk.delete_referential_action = 1 THEN ' ON DELETE CASCADE' 
                WHEN fk.delete_referential_action = 2 THEN ' ON DELETE SET NULL'
                WHEN fk.delete_referential_action = 3 THEN ' ON DELETE SET DEFAULT' 
                ELSE '' 
              END
            + CASE 
                WHEN fk.update_referential_action = 1 THEN ' ON UPDATE CASCADE'
                WHEN fk.update_referential_action = 2 THEN ' ON UPDATE SET NULL'
                WHEN fk.update_referential_action = 3 THEN ' ON UPDATE SET DEFAULT'  
                ELSE '' 
              END 
            + CHAR(13) + 'ALTER TABLE ' + @object_name + ' CHECK CONSTRAINT [' + fk.name  + ']' + CHAR(13)
        FROM sys.foreign_keys fk WITH (NOWAIT)
        JOIN sys.objects ro WITH (NOWAIT) ON ro.[object_id] = fk.referenced_object_id
        WHERE fk.parent_object_id = @object_id
        FOR XML PATH(N''), TYPE).value('.', 'NVARCHAR(MAX)')), '')
    + ISNULL(((SELECT
         CHAR(13) + 'CREATE' + CASE WHEN i.is_unique = 1 THEN ' UNIQUE' ELSE '' END 
                + ' NONCLUSTERED INDEX [' + i.name + '] ON ' + @object_name + ' (' +
                STUFF((
                SELECT ', [' + c.name + ']' + CASE WHEN c.is_descending_key = 1 THEN ' DESC' ELSE ' ASC' END
                FROM index_column c
                WHERE c.is_included_column = 0
                    AND c.index_id = i.index_id
                FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, '') + ')'  
                + ISNULL(CHAR(13) + 'INCLUDE (' + 
                    STUFF((
                    SELECT ', [' + c.name + ']'
                    FROM index_column c
                    WHERE c.is_included_column = 1
                        AND c.index_id = i.index_id
                    FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, '') + ')', '')  + CHAR(13)
        FROM sys.indexes i WITH (NOWAIT)
        WHERE i.[object_id] = @object_id
            AND i.is_primary_key = 0
            AND i.[type] = 2
        FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)')
    ), '')

PRINT @SQL
--EXEC sys.sp_executesql @SQL
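One caveat worth knowing: PRINT truncates NVARCHAR(MAX) output at 4,000 characters, so for wide tables the generated script can come out clipped. A minimal sketch that prints @SQL in slices instead (slice boundaries may split a line, which is harmless for copy-paste):

```sql
-- Print @SQL in 4000-character slices so PRINT does not truncate it
DECLARE @pos INT
SET @pos = 1
WHILE @pos <= LEN(@SQL)
BEGIN
    PRINT SUBSTRING(@SQL, @pos, 4000)
    SET @pos = @pos + 4000
END
```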

Output:

CREATE TABLE [dbo].[WorkOut]
(
      [WorkOutID] BIGINT NOT NULL IDENTITY(1,1)
    , [TimeSheetDate] DATETIME NOT NULL
    , [DateOut] DATETIME NOT NULL
    , [EmployeeID] INT NOT NULL
    , [IsMainWorkPlace] BIT NOT NULL DEFAULT((1))
    , [DepartmentUID] UNIQUEIDENTIFIER NOT NULL
    , [WorkPlaceUID] UNIQUEIDENTIFIER NULL
    , [TeamUID] UNIQUEIDENTIFIER NULL
    , [WorkShiftCD] NVARCHAR(10) COLLATE Cyrillic_General_CI_AS NULL
    , [WorkHours] REAL NULL
    , [AbsenceCode] VARCHAR(25) COLLATE Cyrillic_General_CI_AS NULL
    , [PaymentType] CHAR(2) COLLATE Cyrillic_General_CI_AS NULL
    , [CategoryID] INT NULL
    , [Year] AS (datepart(year,[TimeSheetDate]))
    , CONSTRAINT [PK_WorkOut] PRIMARY KEY ([WorkOutID] ASC)
)

ALTER TABLE [dbo].[WorkOut] WITH CHECK ADD CONSTRAINT [FK_WorkOut_Employee_EmployeeID] FOREIGN KEY([EmployeeID]) REFERENCES [dbo].[Employee] ([EmployeeID])
ALTER TABLE [dbo].[WorkOut] CHECK CONSTRAINT [FK_WorkOut_Employee_EmployeeID]

CREATE NONCLUSTERED INDEX [IX_WorkOut_WorkShiftCD_AbsenceCode] ON [dbo].[WorkOut] ([WorkShiftCD] ASC, [AbsenceCode] ASC)
INCLUDE ([WorkOutID], [WorkHours])

Also check this article -

How to Generate a CREATE TABLE Script For an Existing Table: Part 1

qid & accept id: (713960, 714247) query: How to drop IDENTITY property of column in SQL Server 2005 soup:

soup wrap:

If you are just processing rows as you describe, wouldn't it be better to just select the top N primary key values into a temp table, like:

CREATE TABLE #KeysToProcess
(
     TempID    int  not null primary key identity(1,1)
    ,YourKey1  int  not null
    ,YourKey2  int  not null
)

INSERT INTO #KeysToProcess (YourKey1,YourKey2)
SELECT TOP n YourKey1,YourKey2  FROM MyTable

The keys should not change very often (I hope) but other columns can with no harm to doing it this way.

Get the @@ROWCOUNT of the insert and you can do an easy loop on TempID, which will run from 1 to @@ROWCOUNT.
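That loop could be sketched like this (TOP 100 and the per-row SELECT are placeholders; the point is capturing @@ROWCOUNT immediately after the INSERT and walking TempID from 1 upward):

```sql
DECLARE @rc INT, @i INT

INSERT INTO #KeysToProcess (YourKey1, YourKey2)
SELECT TOP 100 YourKey1, YourKey2 FROM MyTable

SET @rc = @@ROWCOUNT   -- capture right away, before anything else runs

SET @i = 1
WHILE @i <= @rc
BEGIN
    -- process the row whose keys sit at TempID = @i
    SELECT YourKey1, YourKey2
    FROM #KeysToProcess
    WHERE TempID = @i

    SET @i = @i + 1
END
```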

and/or

just join #KeysToProcess to your MyKeys table and be on your way, with no need to duplicate all the data.

This runs fine on my SQL Server 2005, where MyTable.MyKey is an identity column.

-- Create empty temp table
SELECT *
INTO #TmpMike
FROM (SELECT
      m1.*
      FROM MyTable                 m1
          LEFT OUTER JOIN MyTable  m2 ON m1.MyKey=m2.MyKey
      WHERE 1=0
 ) dt

INSERT INTO #TmpMike
SELECT TOP 1 * FROM MyTable

SELECT * from #TmpMike
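The self-join works because SELECT ... INTO only copies the IDENTITY property when the column comes straight from a single source table; once the column is the result of a join (or any expression), the property is dropped. A lighter sketch of the same idea, listing the columns explicitly (column names other than MyKey are placeholders):

```sql
-- Wrapping the identity column in an expression (CAST here) also strips IDENTITY
SELECT CAST(MyKey AS INT) AS MyKey
     , SomeColumn
     , OtherColumn
INTO #Tmp_NoIdentity
FROM MyTable
WHERE 1 = 0   -- copy the structure only, no rows
```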



EDIT
THIS WORKS, with no errors...

-- Create empty temp table
SELECT *
INTO #Tmp_MyTable
FROM (SELECT
          m1.*
          FROM MyTable                 m1
              LEFT OUTER JOIN MyTable  m2 ON m1.KeyValue=m2.KeyValue
          WHERE 1=0
     ) dt
...
WHILE ...
BEGIN
    ...
    INSERT INTO #Tmp_MyTable
    SELECT TOP (@n) *
    FROM MyTable
    ...

END

However, what is your real problem? Why do you need to loop while inserting "*" into this temp table? You may be able to shift strategy and come up with a much better algorithm overall.

qid & accept id: (726582, 840879) query: Updates on PIVOTs in SQL Server 2008 soup:

soup wrap:

This will only really work if the pivoted columns form a unique identifier. So let's take Buggy's example; here is the original table:

TaskID    Date    Hours

and we want to pivot it into a table that looks like this:

TaskID    11/15/1980    11/16/1980    11/17/1980 ... etc.

In order to create the pivot, you would do something like this:

DECLARE @FieldList NVARCHAR(MAX)

SELECT
    @FieldList =
    CASE WHEN @FieldList IS NOT NULL THEN 
        @FieldList + ', [' + CONVERT(NVARCHAR(30), [Date]) + ']' 
    ELSE 
        '[' + CONVERT(NVARCHAR(30), [Date]) + ']' 
    END
FROM
    Tasks



DECLARE @PivotSQL NVARCHAR(MAX)
SET @PivotSQL = 
    '
        SELECT 
            TaskID
            , ' + @FieldList + '
        INTO
            ##Pivoted
        FROM 
            (
                SELECT * FROM Tasks
            ) AS T
        PIVOT
            (
                MAX(Hours) FOR T.[Date] IN (' + @FieldList + ') 
            ) AS PVT
    '

EXEC(@PivotSQL)

So then you have your pivoted table in ##Pivoted. Now you perform an update to one of the hours fields:

UPDATE
    ##Pivoted
SET
    [11/16/1980 00:00:00] = 10
WHERE
    TaskID = 1234

Now ##Pivoted has an updated version of the hours for a task that took place on 11/16/1980 and we want to save that back to the original table, so we use an UNPIVOT:

DECLARE @UnPivotSQL NVarChar(MAX)
SET @UnPivotSQL = 
    '
        SELECT
              TaskID
            , [Date]
            , [Hours]
        INTO 
            ##UnPivoted
        FROM
            ##Pivoted
        UNPIVOT
        (
            [Hours] FOR [Date] IN (' + @FieldList + ')
        ) AS UP

    '

EXEC(@UnPivotSQL)

UPDATE
    Tasks
SET
    [Hours] = UP.[Hours]
FROM
    Tasks T
INNER JOIN
    ##UnPivoted UP
ON
    T.TaskID = UP.TaskID
    AND T.[Date] = UP.[Date]

You'll notice that I modified Buggy's example to remove aggregation by day-of-week. That's because there's no going back and updating if you perform any sort of aggregation. If I update the SUNHours field, how do I know which Sunday's hours I'm updating? This will only work if there is no aggregation. I hope this helps!

qid & accept id: (778909, 778922) query: Most efficent method for adding leading 0's to an int in sql soup:

soup wrap:

That is pretty much the way: Adding Leading Zeros To Integer Values

So, to save following the link, the query looks like this, where #Numbers is the table and Num is the column:

   SELECT RIGHT('000000000' + CONVERT(VARCHAR(8),Num), 8) FROM #Numbers

for negative or positive values

declare @v varchar(6)
select @v = -5

SELECT case  when @v < 0 
then '-' else '' end + RIGHT('00000' + replace(@v,'-',''), 5) 
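An equivalent spelling that avoids counting zeros by hand uses REPLICATE (same width of 5 as above):

```sql
DECLARE @v VARCHAR(6)
SELECT @v = -5

SELECT CASE WHEN @v < 0 THEN '-' ELSE '' END
     + RIGHT(REPLICATE('0', 5) + REPLACE(@v, '-', ''), 5)
```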
qid & accept id: (802027, 802046) query: In SQL, how do you get the top N rows ordered by a certain column? soup:

soup wrap:

Definition: LIMIT is used to restrict your MySQL query results to those that fall within a specified range. You can use it to show the first X results, or to show a range of results from X to Y. It is phrased as LIMIT X, Y and included at the end of your query. X is the starting point (remember the first record is 0) and Y is the number of records to display.

Also known as: range results. Examples:

SELECT * FROM `your_table` LIMIT 0, 10 

This will display the first 10 results from the database.

SELECT * FROM `your_table` LIMIT 5, 5 

This will show records 6, 7, 8, 9, and 10
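Since the question asks for the top N rows ordered by a certain column, remember that LIMIT by itself returns rows in no guaranteed order; pair it with ORDER BY (the column name here is a placeholder):

```sql
SELECT *
FROM `your_table`
ORDER BY `some_column` DESC
LIMIT 10
```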

More from About.com

qid & accept id: (852225, 852247) query: persons where the children are grouped for their parent soup:

soup wrap:

I recommend splitting this into two queries.

First, get a list of parents:

SELECT *
FROM Persons
WHERE id IN (SELECT parent FROM Persons)
ORDER BY age, id

Then get a properly sorted list of children:

SELECT Child.*
FROM Persons AS Child
     JOIN Persons AS Parent ON (Parent.id = Child.parent)
ORDER BY Parent.age, Parent.id, Child.age, Child.id

The two lists can then easily be merged on the id/parent since they are both sorted first by parent's age.

qid & accept id: (895876, 895897) query: How can I count the non-unique combinations of values in MySQL? soup:
soup wrap:
select list_id, address_id, count(*) as count
from LIST_MEMBERSHIPS
group by 1, 2
order by 3 desc

You may find it useful to add

having count > 1
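Putting the pieces together, HAVING slots in between GROUP BY and ORDER BY; referring to the alias in HAVING is a MySQL extension, so HAVING COUNT(*) > 1 is the portable spelling:

```sql
SELECT list_id, address_id, COUNT(*) AS count
FROM LIST_MEMBERSHIPS
GROUP BY list_id, address_id
HAVING COUNT(*) > 1
ORDER BY count DESC
```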
qid & accept id: (938232, 938272) query: SQL Pivot on subset soup:

soup wrap:

Here's an attempt at PIVOT:

select *
from YourTable
PIVOT (sum(amount) FOR Method in ([Cash],[Check])) as Y

Given that it's just two columns, could try with a join:

select
    type
,   cash = a.amount
,   [check] = b.amount
from yourtable a
full join yourtable b on a.type = b.type
where a.method = 'cash' or b.method = 'Check'
qid & accept id: (951401, 951768) query: SQL 2005 Split Comma Separated Column on Delimiter soup:

soup wrap:

Yes, it's possible with CROSS APPLY (SQL 2005+):

with testdata (CommaColumn, ValueColumn1, ValueColumn2) as (
  select 'ABC,123', 1, 2 union all
  select 'XYZ, 789', 2, 3
  ) 
select 
  b.[Value] as SplitValue
, a.ValueColumn1
, a.ValueColumn2
from testdata a
cross apply dbo.Split(a.CommaColumn,',') b

Notes:

  1. You should add an index to the result set of your split column, so that it returns two columns, IndexNumber and Value.

  2. In-line implementations with a numbers table are generally faster than your procedural version here.

eg:

create function [dbo].[Split] (@list nvarchar(max), @delimiter nchar(1) = N',')
returns table
as
return (
  select 
    Number = row_number() over (order by Number)
  , [Value] = ltrim(rtrim(convert(nvarchar(4000),
        substring(@list, Number
        , charindex(@delimiter, @list+@delimiter, Number)-Number
        )
    )))
  from dbo.Numbers
  where Number <= convert(int, len(@list))
    and substring(@delimiter + @list, Number, 1) = @delimiter
  )
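The function assumes a dbo.Numbers table of sequential integers. If you don't already have one, a common one-off setup sketch (the 100,000 cap is an assumption; make it larger than your longest string):

```sql
-- Build dbo.Numbers with values 1..100000
SELECT TOP (100000) IDENTITY(INT, 1, 1) AS Number
INTO dbo.Numbers
FROM sys.all_objects a
CROSS JOIN sys.all_objects b

ALTER TABLE dbo.Numbers ADD CONSTRAINT PK_Numbers PRIMARY KEY CLUSTERED (Number)
```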

Erland Sommarskog has the definitive page on this, I think: http://www.sommarskog.se/arrays-in-sql-2005.html

qid & accept id: (955927, 955972) query: What SQL would I need to use to list all the stored procedures on an Oracle database? soup:

soup wrap:

The DBA_OBJECTS view will list the procedures (as well as almost any other object):

SELECT owner, object_name
FROM dba_objects 
WHERE object_type = 'PROCEDURE'

The DBA_SOURCE view will list the lines of source code for a procedure in question:

SELECT line, text
FROM dba_source
WHERE owner = ?
  AND name = ?
  AND type = 'PROCEDURE'
ORDER BY line

Note: Depending on your privileges, you may not be able to query the DBA_OBJECTS and DBA_SOURCE views. In this case, you can use ALL_OBJECTS and ALL_SOURCE instead. The DBA_ views contain all objects in the database, whereas the ALL_ views contain only those objects that you may access.
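And if you only care about procedures in your own schema, the USER_ variant needs no special privileges at all (USER_OBJECTS has no OWNER column, since everything in it is yours):

```sql
SELECT object_name
FROM user_objects
WHERE object_type = 'PROCEDURE'
```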

qid & accept id: (970357, 970399) query: How can I change an URL inside a field in MySQL? soup:

soup wrap:

This can be easily achieved with a simple SQL statement using MySQL's replace() function. Before we do that, you should definitely do a database dump or whatever you use for backups. It's not only that it's The Right Thing To Do™, but if you make a mistake on your substitution, it might prove difficult to undo it (yes, you could rollback, but you might only figure out your mistake later on.)

To create a database dump from MySQL, you can run something like this --

mysqldump -h hostname -u username -p databasename > my_sql_dump.sql

Where (and you probably know this, but for the sake of completeness for future generations...) --

- hostname is the host your MySQL server is running on
- username is the MySQL user to connect as
- -p tells the client to prompt for that user's password
- databasename is the name of the database to dump

Now that we got that out of the way, you can log in to the MySQL database using:

mysql -h hostname -u username -p databasename

And simply run this statement:

UPDATE `wp-posts` SET `post-content` = REPLACE(`post-content`, "http://oldurl.com", "http://newurl.com");

And that should do it!
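Before running an UPDATE like this in anger, it can be reassuring to count how many rows will actually be touched (same table and column names as above):

```sql
SELECT COUNT(*)
FROM `wp-posts`
WHERE `post-content` LIKE '%http://oldurl.com%';
```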

If you make a mistake, you can often rerun the statement with the original and new texts inverted (if the new text -- in your case the new URL -- didn't already exist in the text before you did the replace.) Sometimes this is not possible depending on what the new text was (again, not likely in your case.) Anyway, you can always try recovering the sql dump --

cat my_sql_dump.sql | mysql -h hostname -u username -p databasename

And voilà.

qid & accept id: (1019661, 1019944) query: Finding Start and End Dates from Date Numbers Table (Date Durations) soup:

soup wrap:

Assuming the Day IDs are always sequential for a partial solution...

select *
  from employee_schedule a                    
 where not exists( select *                          
                     from employee_schedule b        
                    where a.employeeid = b.employeeid
                      and a.projectid  = b.projectid 
                      and (a.dayid - 1) = b.dayid )

lists the start day IDs:

 ID      EMPLOYEEID       PROJECTID           DAYID 
 1              64               2             168 
 5              64               1             169 
 9              64               2             182 



select *
  from employee_schedule a                   
 where not exists( select *                         
                     from employee_schedule b       
                    where a.employeeid = b.employeeid
                      and a.projectid  = b.projectid
                      and (a.dayid + 1) = b.dayid )

lists the end day IDs:

  ID      EMPLOYEEID       PROJECTID           DAYID 
  4              64               2             171 
  8              64               1             172 
 11              64               2             184 
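The two queries can also be combined to pair each start with its matching end, yielding one row per continuous stretch (a sketch built from the same NOT EXISTS tests as above):

```sql
select s.employeeid
     , s.projectid
     , s.dayid AS start_dayid
     , ( select min(e.dayid)
           from employee_schedule e
          where e.employeeid = s.employeeid
            and e.projectid  = s.projectid
            and e.dayid     >= s.dayid
            and not exists ( select *
                               from employee_schedule x
                              where x.employeeid = e.employeeid
                                and x.projectid  = e.projectid
                                and x.dayid      = e.dayid + 1 ) ) AS end_dayid
  from employee_schedule s
 where not exists ( select *
                      from employee_schedule b
                     where s.employeeid = b.employeeid
                       and s.projectid  = b.projectid
                       and (s.dayid - 1) = b.dayid )
```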
qid & accept id: (1069311, 1069388) query: Passing an array of parameters to a stored procedure soup:

soup wrap:

Use a stored procedure:

EDIT: A complement, for serializing a List<int> (or anything else):

List<int> testList = new List<int>();

testList.Add(1);
testList.Add(2);
testList.Add(3);

XmlSerializer xs = new XmlSerializer(typeof(List<int>));
MemoryStream ms = new MemoryStream();
xs.Serialize(ms, testList);

string resultXML = UTF8Encoding.UTF8.GetString(ms.ToArray());

The result (ready to use with XML parameter):



<ArrayOfInt xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <int>1</int>
  <int>2</int>
  <int>3</int>
</ArrayOfInt>


ORIGINAL POST:

Passing XML as parameter:


<ids>
    <id>1</id>
    <id>2</id>
</ids>


CREATE PROCEDURE [dbo].[DeleteAllData]
(
    @XMLDoc XML
)
AS
BEGIN

DECLARE @handle INT

EXEC sp_xml_preparedocument @handle OUTPUT, @XMLDoc

DELETE FROM
    YOURTABLE
WHERE
    YOUR_ID_COLUMN NOT IN (
        SELECT * FROM OPENXML (@handle, '/ids/id') WITH (id INT '.') 
    )
EXEC sp_xml_removedocument @handle

END
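A hedged usage sketch (the XML literal matches the '/ids/id' path the procedure parses; note the procedure deletes every row whose id is not in the list):

```sql
DECLARE @ids XML
SET @ids = N'<ids><id>1</id><id>2</id></ids>'

EXEC dbo.DeleteAllData @XMLDoc = @ids
```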

qid & accept id: (1070122, 1070233) query: How can I make a LINQ query with subqueries in the from statement? soup:

soup wrap:

Just do the order details condition in the usual way:

from o in orders
join od in orderdetails on o.id equals od.orderid
  into details
where details.status == 'A'
select new { Order = o, Details = details}

(NB: Details is a sequence containing each matching details record; LINQ operators like First and FirstOrDefault can be used to extract just one.)

Or use an expression as the data source

from o in orders
join od in orderdetails.Where(d => d.Status == 'A') on o.id equals od.orderid
  into details
select new { Order = o, Details = details}

Or even, use another comprehension expression as the source expression:

from o in orders
join od in (from d in orderdetails
            where d.Status == 'A'
            select d)
  on o.id equals od.orderid
  into details
select new { Order = o, Details = details}

(Setting your DataContext's Log property allows you to see the generated SQL, so you can compare what SQL is actually executed.)

EDIT: Changed to use a group join (... into var) to get the outer join (rather than an inner join).

qid & accept id: (1074529, 1210625) query: SOAP call with query on result (SSRS, Sharepoint) soup:

soup wrap:

A post at:

http://social.msdn.microsoft.com/forums/en-US/sqlreportingservices/thread/1562bc7c-8348-441d-8b59-245d70c3d967/

Suggested using this syntax for placement of the <Query> node (this example retrieves the item with an ID of 1):


  http://schemas.microsoft.com/sharepoint/soap/GetListItems
  
    
      
        {CE7A4C2E-D03A-4AF3-BCA3-BA2A0ADCADC7}
      
      
        
          
            
              
                
                1
              
            
          
        
      
    
  
  *

However this would give me the following error:

Failed to execute web request for the specified URL

With the following in the details:

Element <Query> of parameter query is missing or invalid

From looking at the SOAP message with Microsoft Network Monitor, it looks as though the <Query> node is getting escaped to &lt;Query&gt; etc., which is why it fails.

However, I was able to get this to work using the method described in Martin Kurek's response at:

http://www.sharepointblogs.com/dwise/archive/2007/11/28/connecting-sql-reporting-services-to-a-sharepoint-list-redux.aspx

So, I used this as my query:


  http://schemas.microsoft.com/sharepoint/soap/GetListItems
   
      
         
            {CE7A4C2E-D03A-4AF3-BCA3-BA2A0ADCADC7}
         
         
           
      
   
   *

And then defined a parameter on the dataset named query, with the following value:

1

I was also able to make my query dependent on a report parameter, by setting the query dataset parameter to the following expression:

="" & 
Parameters!TaskID.Value & 
""
qid & accept id: (1146012, 1146072) query: Join on multiple booleans soup:

It's not really a SQL problem you're asking about, just a boolean expression problem. I assume you've got another column in these tables that allows you to join the rows in t1 to t2, but following your examples (where there is only 1 row in t1), you can do it as:

\n
  SELECT t2.A2\n       , t2.B2\n       , t3.C2\n    FROM t1\n       , t2\n   WHERE (t2.A2 OR NOT T1.A1)\n     AND (t2.B2 OR NOT T1.B1)\n     AND (t2.C2 OR NOT T1.C1)\n;\n
\n

I now see the non-abstracted answer you've posted above. Based on that, there are some issues in your SQL. For one thing, you should be expressing only the conditions in your JOIN clauses that relate the vw_fbScheduleFull table to the fbDivision table (i.e. the foreign/primary key relationship); all the LowerDivision/UpperDivision/SeniorDivision stuff should be in the WHERE clause.

\n

Secondly, you're ignoring the operator precedence of the AND and OR operators - you want to enclose each of the *Division pairs within parens to avoid undesirable effects.

\n

Not knowing the full schema of the tables, I would guess that the proper version of this query would look something like this:

\n
  SELECT vw_fbScheduleFull.LocationName\n       , vw_fbScheduleFull.FieldName\n       , vw_fbScheduleFull.Description\n       , vw_fbScheduleFull.StartTime\n       , vw_fbScheduleFull.EndTime\n       , vw_fbScheduleFull.LowerDivision\n       , vw_fbScheduleFull.UpperDivision\n       , vw_fbScheduleFull.SeniorDivision\n    FROM vw_fbScheduleFull \n       , fbDivision\n   WHERE vw_fbScheduleFull.PracticeDate = ?\n     AND vw_fbScheduleFull.Locked IS NULL \n     AND fbDivision.DivisionName = ?\n     AND (vw_fbScheduleFull.LowerDivision = 1 OR fbDivision.LowerDivision <> 1)\n     AND (vw_fbScheduleFull.UpperDivision = 1 OR fbDivision.UpperDivision <> 1)\n     AND (vw_fbScheduleFull.SeniorDivision = 1 OR fbDivision.SeniorDivision <> 1)\nORDER BY vw_fbScheduleFull.LocationName\n       , vw_fbScheduleFull.FieldName\n       , vw_fbScheduleFull.StartTime \n;\n
\n

Looking one more time, I realize that your "fbDivision.DivisionName = ?" probably is reducing the number of rows in that table to one, and that there isn't a formal PK/FK relationship between those two tables. In which case, you should dispense with the INNER JOIN nomenclature in the FROM clause and just list the two tables; I've updated my example.

\n soup wrap:

It's not really a SQL problem you're asking, just a boolean expression problem. I assume you've got another column in these tables that allows you to join the rows in t1 to t2, but following your examples (where there is only 1 row in t1), you can do it as:

  SELECT t2.A2
       , t2.B2
       , t2.C2
    FROM t1
       , t2
   WHERE (t2.A2 OR NOT T1.A1)
     AND (t2.B2 OR NOT T1.B1)
     AND (t2.C2 OR NOT T1.C1)
;

I now see the non-abstracted answer you've posted above. Based on that, there are some issues in your SQL. For one thing, you should be expressing only the conditions in your JOIN clauses that relate the vw_fbScheduleFull table to the fbDivision table (i.e. the foreign/primary key relationship); all the LowerDivision/UpperDivision/SeniorDivision stuff should be in the WHERE clause.

Secondly, you're ignoring the operator precedence of the AND and OR operators - you want to enclose each of the *Division pairs within parens to avoid undesirable effects.
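To see why the parentheses matter: AND binds more tightly than OR, so "a OR b AND c" is evaluated as "a OR (b AND c)". A quick sketch in SQLite (via Python's sqlite3, with 1/0 standing in for the boolean flags):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# AND binds tighter than OR: 1 OR 0 AND 0 parses as 1 OR (0 AND 0)
unparenthesized = con.execute("SELECT 1 OR 0 AND 0").fetchone()[0]
parenthesized = con.execute("SELECT (1 OR 0) AND 0").fetchone()[0]

print(unparenthesized)  # 1 -- the OR decides the whole expression
print(parenthesized)    # 0 -- the parens force the AND to apply last
```

Without the parens around each *Division pair, the last OR would swallow everything before it.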

Not knowing the full schema of the tables, I would guess that the proper version of this query would look something like this:

  SELECT vw_fbScheduleFull.LocationName
       , vw_fbScheduleFull.FieldName
       , vw_fbScheduleFull.Description
       , vw_fbScheduleFull.StartTime
       , vw_fbScheduleFull.EndTime
       , vw_fbScheduleFull.LowerDivision
       , vw_fbScheduleFull.UpperDivision
       , vw_fbScheduleFull.SeniorDivision
    FROM vw_fbScheduleFull 
       , fbDivision
   WHERE vw_fbScheduleFull.PracticeDate = ?
     AND vw_fbScheduleFull.Locked IS NULL 
     AND fbDivision.DivisionName = ?
     AND (vw_fbScheduleFull.LowerDivision = 1 OR fbDivision.LowerDivision <> 1)
     AND (vw_fbScheduleFull.UpperDivision = 1 OR fbDivision.UpperDivision <> 1)
     AND (vw_fbScheduleFull.SeniorDivision = 1 OR fbDivision.SeniorDivision <> 1)
ORDER BY vw_fbScheduleFull.LocationName
       , vw_fbScheduleFull.FieldName
       , vw_fbScheduleFull.StartTime 
;

Looking one more time, I realize that your "fbDivision.DivisionName = ?" is probably reducing the number of rows in that table to one, and that there isn't a formal PK/FK relationship between those two tables. In that case, you should dispense with the INNER JOIN nomenclature in the FROM clause and just list the two tables; I've updated my example.

qid & accept id: (1154702, 1154723) query: SQL Checking for NULL and incrementals soup:

soup wrap:

This kind of incremental querying is just not efficient. You'll get better results by saying "I'll never need more than 100 results, so give me these":

SELECT top 100 *
FROM news
ORDER BY date desc

Then filtering further on the client side if you want only a particular day's items (such as the items with a common date as the first item in the result).

Or, you could transform your multiple query request into a two query request:

DECLARE
  @theDate datetime,
  @theDate2 datetime

SET @theDate = (SELECT Max(date) FROM news)
  --trim the time off of @theDate
SET @theDate = DateAdd(dd, DateDiff(dd, 0, @theDate), 0)
SET @theDate2 = DateAdd(dd, 1, @theDate)

SELECT *
FROM news
WHERE @theDate <= date AND date < @theDate2
ORDER BY date desc
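The two-query version translates to SQLite roughly like this (table and column names invented for the sketch; SQLite's date() does the day-truncation work that the DateAdd/DateDiff pair does in T-SQL, and the half-open range [day, day + 1) catches every timestamp on the latest day):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE news (title TEXT, date TEXT)")
con.executemany("INSERT INTO news VALUES (?, ?)", [
    ("old story",    "2009-07-19 08:00:00"),
    ("morning item", "2009-07-20 09:15:00"),
    ("evening item", "2009-07-20 21:40:00"),
])

# Step 1: find the latest day (time trimmed off by date()).
latest_day = con.execute("SELECT date(MAX(date)) FROM news").fetchone()[0]

# Step 2: half-open range filter, [latest_day, latest_day + 1 day).
rows = con.execute(
    "SELECT title FROM news WHERE date >= ? AND date < date(?, '+1 day') "
    "ORDER BY date DESC",
    (latest_day, latest_day),
).fetchall()

print([r[0] for r in rows])  # ['evening item', 'morning item']
```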
qid & accept id: (1179355, 1179472) query: Oracle Minus - From a list of values, how do I count ONLY non reversed values soup:

soup wrap:

Minus operations use distinct sets. Try this instead:

select row_number() over (partition by name_id, val order by name_id, val), name_id, val 
from check_minus
where val > 0
  minus
select row_number() over (partition by name_id, val order by name_id, val), name_id, abs(val) 
from check_minus
where val < 0

It produces:

RowNum Name_Id   Val
1,     1,        20
2,     1,        5
2,     1,        15
3,     1,        15
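A sketch of the same idea in SQLite (EXCEPT standing in for Oracle's MINUS; toy data invented): numbering the duplicates with row_number() stops the set operation from collapsing the two 15s into one, so the single -15 cancels only one of them.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE check_minus (name_id INTEGER, val INTEGER)")
con.executemany("INSERT INTO check_minus VALUES (?, ?)",
                [(1, 20), (1, 15), (1, 15), (1, -15), (1, 5)])

# The two 15s get row numbers 1 and 2, so EXCEPT removes only
# the (rn=1, 15) tuple matched by the single reversal.
rows = con.execute("""
    SELECT row_number() OVER (PARTITION BY name_id, val) AS rn, name_id, val
    FROM check_minus WHERE val > 0
    EXCEPT
    SELECT row_number() OVER (PARTITION BY name_id, abs(val)) AS rn, name_id, abs(val)
    FROM check_minus WHERE val < 0
""").fetchall()

print(sorted(rows))  # [(1, 1, 5), (1, 1, 20), (2, 1, 15)]
```

One of the two 15s survives: exactly the non-reversed value we wanted to count.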
qid & accept id: (1197943, 1319077) query: Creating public synonym at system level soup:

soup wrap:

Try creating a view called MASTER_MYVIEW first (you may need to deal with privileges there as well):

create view master_myview as select ...;

Then create a public synonym for that new view:

create or replace public synonym master_myview for .master_myview;
qid & accept id: (1207740, 1207791) query: Make a query Count() return 0 instead of empty soup:

soup wrap:

Replace the Count statements with

Sum(Iif(DateDiff("d",DateAdded,Date())>=91,Iif(DateDiff("d",DateAdded,Date())<=180,'1','0'),'0')) AS BTWN_91_180,

I'm not a fan of the nested Iifs, but it doesn't look like there's any way around them, since DateDiff and BETWEEN...AND were not playing nicely.

To prune ItemNames without any added dates, the query block had to be enclosed in a larger query, since checking against a calculated field cannot be done from inside a query. The end result is this query:

SELECT *
FROM 
     (
     SELECT DISTINCT Source.ItemName AS InvestmentManager, 
     Sum(Iif(DateDiff("d",DateAdded,Date())>=20,Iif(DateDiff("d",DateAdded,Date())<=44,'1','0'),'0')) AS BTWN_20_44,
     Sum(Iif(DateDiff("d",DateAdded,Date())>=45,Iif(DateDiff("d",DateAdded,Date())<=60,'1','0'),'0')) AS BTWN_45_60,
     Sum(Iif(DateDiff("d",DateAdded,Date())>=61,Iif(DateDiff("d",DateAdded,Date())<=90,'1','0'),'0')) AS BTWN_61_90,
     Sum(Iif(DateDiff("d",DateAdded,Date())>=91,Iif(DateDiff("d",DateAdded,Date())<=180,'1','0'),'0')) AS BTWN_91_180,
     Sum(Iif(DateDiff("d",DateAdded,Date())>180,'1','0')) AS GT_180,
     Sum(Iif(DateDiff("d",DateAdded,Date())>=20,'1','0')) AS Total
     FROM Source
     WHERE CompleteState='FAILED'
     GROUP BY ItemName
     )
WHERE Total > 0;
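Outside Access, the same zero-instead-of-missing trick is usually written with CASE instead of nested Iifs (and BETWEEN works fine there). A SQLite sketch with a hypothetical table and columns:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE source (item TEXT, age_days INTEGER)")
con.executemany("INSERT INTO source VALUES (?, ?)",
                [("A", 95), ("A", 120), ("A", 200), ("B", 10)])

# Sum() of a CASE that yields 1 or 0 counts only the matching rows,
# so a group with no matches reports 0 rather than vanishing.
rows = con.execute("""
    SELECT item,
           SUM(CASE WHEN age_days BETWEEN 91 AND 180 THEN 1 ELSE 0 END) AS btwn_91_180,
           SUM(CASE WHEN age_days > 180 THEN 1 ELSE 0 END) AS gt_180
    FROM source
    GROUP BY item
    ORDER BY item
""").fetchall()

print(rows)  # [('A', 2, 1), ('B', 0, 0)]
```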
qid & accept id: (1263780, 1263795) query: SQL - Find patterns of records soup:

soup wrap:

If it's one song after another, assuming a table named tblSongs with 'sequence' and 'name' columns, you might want to try something like:

select top N first.name, second.name, count(*)
from tblSongs as first 
     inner join tblSongs as second
         on second.sequence=first.sequence + 1
group by first.name, second.name
order by count(*) desc
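That first query can be sketched in SQLite like this (TOP N becomes LIMIT there; playlist data invented, and the aliases shortened to s1/s2):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tblSongs (sequence INTEGER, name TEXT)")
playlist = ["intro", "hit", "ballad", "hit", "ballad", "outro"]
con.executemany("INSERT INTO tblSongs VALUES (?, ?)", list(enumerate(playlist)))

# Join each song to the one played right after it, then count
# how often each ordered pair of songs occurs.
rows = con.execute("""
    SELECT s1.name, s2.name, COUNT(*) AS times
    FROM tblSongs AS s1
    JOIN tblSongs AS s2 ON s2.sequence = s1.sequence + 1
    GROUP BY s1.name, s2.name
    ORDER BY times DESC, s1.name
""").fetchall()

print(rows[0])  # ('hit', 'ballad', 2) -- the most common adjacent pair
```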

If song sequence X,Y is counted the same as Y,X then

select top N first.name, second.name, count(*)
from tblSongs as first 
     inner join tblSongs as second
         on second.sequence=first.sequence + 1
         or second.sequence=first.sequence - 1
group by first.name, second.name
order by count(*) desc

If you are looking for any pattern of 2 song sequences, then

select first.name, second.name, abs(second.sequence - first.sequence) as spacing_count
from tblSongs as first 
     inner join tblSongs as second
         on second.sequence=first.sequence + 1
         or second.sequence=first.sequence - 1

Then do some statistical analysis on the spacing_count (which is beyond me).

I believe those will get you started.

qid & accept id: (1326701, 1326746) query: Single or multiple INSERTs based on values SELECTed soup:

soup wrap:

It's not trivial. First, you need another column, "Flag", initialized to 0:

INSERT INTO Results (year, month, day, hour, duration, court, Flag)
SELECT DATEPART (yy, b.StartDateTime),
       DATEPART (mm, b.StartDateTime),
       DATEPART (dd, b.StartDateTime),
       DATEPART (hh, b.StartDateTime),
       a.Duration,
       a.Court,
       0
FROM Bookings b
INNER JOIN Activities a
ON b.ActivityID = a.ID

You need to run these queries several times:

-- Copy all rows with duration > 1 and set the flag to 1
insert into results(year, month, day, hour, duration, court, Flag)
select year, month, day, hour+1, duration-1, court, 1
from results
where duration > 1
;
-- Set the duration of all copied rows to 1
update results
set duration = 1
where flag = 0 and duration > 1
;
-- Prepare the copies for the next round
update results
set flag = 0
where flag = 1

This will create an additional entry for each duration > 1. My guess is that you can't allocate a court for more than 8 hours, so you just need to run these three 8 times to fix all of them.
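The loop can be sketched in SQLite like this (year/month/day columns dropped to keep the demo short); rather than a fixed 8 passes it repeats until nothing is left to split, and each pass peels one hour off every long booking into its own row:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE results (hour INTEGER, duration INTEGER, court TEXT, flag INTEGER)")
con.execute("INSERT INTO results VALUES (9, 3, 'Court 1', 0)")  # a 3-hour booking

while con.execute("SELECT COUNT(*) FROM results WHERE duration > 1").fetchone()[0]:
    # Copy each long row one hour later, one hour shorter, flagged 1.
    con.execute("""INSERT INTO results
                   SELECT hour + 1, duration - 1, court, 1
                   FROM results WHERE duration > 1""")
    # Truncate the originals to a single hour.
    con.execute("UPDATE results SET duration = 1 WHERE flag = 0 AND duration > 1")
    # Unflag the copies so the next pass can split them further.
    con.execute("UPDATE results SET flag = 0 WHERE flag = 1")

rows = con.execute("SELECT hour, duration FROM results ORDER BY hour").fetchall()
print(rows)  # [(9, 1), (10, 1), (11, 1)]
```

The 3-hour booking at 9:00 ends up as three one-hour rows at 9, 10 and 11.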

qid & accept id: (1341505, 1341604) query: Oracle: Normalizing data during migration soup:

soup wrap:

The DISTINCT placed in a subquery should work:

SQL> INSERT INTO meeting
  2     SELECT seq.nextval, meeting_desc, meeting_date
  3       FROM (SELECT DISTINCT meeting_desc, meeting_date
  4               FROM current_table);

2 rows inserted

Once this is done, you would join this newly created table with the old table to associate the generated ids to the children tables:

SQL>   INSERT INTO topic
  2       SELECT m.id, topic_seq.NEXTVAL, ct.topic_desc
  3         FROM current_table ct
  4         JOIN meeting m ON (ct.meeting_desc = m.meeting_desc 
  5                            AND ct.meeting_date = m.meeting_date);

5 rows inserted
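The same two-step migration can be sketched in SQLite (invented sample data; an INTEGER PRIMARY KEY autogenerates the id that the Oracle sequence provides):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE current_table (meeting_desc TEXT, meeting_date TEXT, topic_desc TEXT);
    INSERT INTO current_table VALUES
        ('Standup', '2009-08-01', 'Build'),
        ('Standup', '2009-08-01', 'Tests'),
        ('Review',  '2009-08-02', 'Release');
    CREATE TABLE meeting (id INTEGER PRIMARY KEY, meeting_desc TEXT, meeting_date TEXT);
    CREATE TABLE topic (meeting_id INTEGER, topic_desc TEXT);
""")

# Step 1: one meeting row per distinct (desc, date) pair.
con.execute("""INSERT INTO meeting (meeting_desc, meeting_date)
               SELECT DISTINCT meeting_desc, meeting_date FROM current_table""")

# Step 2: join the old rows back to pick up the generated meeting ids.
con.execute("""INSERT INTO topic
               SELECT m.id, ct.topic_desc
               FROM current_table ct
               JOIN meeting m ON ct.meeting_desc = m.meeting_desc
                             AND ct.meeting_date = m.meeting_date""")

print(con.execute("SELECT COUNT(*) FROM meeting").fetchone()[0])  # 2
print(con.execute("SELECT COUNT(*) FROM topic").fetchone()[0])    # 3
```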
qid & accept id: (1344697, 1344756) query: How can I make a stored procedure return a "dataset" using a parameter I pass? soup:

soup wrap:

To fill a dataset from a stored procedure you would have code like below:

SqlConnection mySqlConnection = new SqlConnection("server=(local);database=MyDatabase;Integrated Security=SSPI;");

    SqlCommand mySqlCommand = mySqlConnection.CreateCommand();
    mySqlCommand.CommandText = "IDCategory";
    mySqlCommand.CommandType = CommandType.StoredProcedure;
    mySqlCommand.Parameters.Add("@IDCategory", SqlDbType.Int).Value = 5;

    SqlDataAdapter mySqlDataAdapter = new SqlDataAdapter();
    mySqlDataAdapter.SelectCommand = mySqlCommand;
    DataSet myDataSet = new DataSet();
    mySqlConnection.Open();
    mySqlDataAdapter.Fill(myDataSet);

Your connection string will be different and there are a few different ways to do this, but this should get you going. Once you have a few of these under your belt, take a look at the using statement; it helps clean up the resources and requires a few less lines of code. This assumes a stored procedure named IDCategory with one parameter of the same name. It may be a little different in your setup.

Your stored procedure in this case will look something like:

CREATE PROC [dbo].[IDCategory] 
    @IDCategory int
AS 
    SELECT IDListing, IDCategory, Price, Seller, Image
         FROM whateveryourtableisnamed
         WHERE IDCategory = @IDCategory

Here's a link on Stored Procedure basics: http://www.sql-server-performance.com/articles/dba/stored_procedures_basics_p1.aspx

Here's a link on DataSets and other items with ADO.Net: http://authors.aspalliance.com/quickstart/howto/doc/adoplus/adoplusoverview.aspx

qid & accept id: (1362148, 1362166) query: How to insert into a table with just one IDENTITY column (SQL Express) soup:
soup wrap:
 INSERT INTO dbo.TableWithOnlyIdentity DEFAULT VALUES

This works just fine in my case. How are you trying to get those rows into the database? SQL Server Mgmt Studio? SQL query from .NET app?

Running inside Visual Studio in the "New Query" window, I get:

The DEFAULT VALUES SQL construct or statement is not supported.

==> OK, so Visual Studio can't handle it - that's not the fault of SQL Server, but of Visual Studio. Use the real SQL Management Studio instead - it works just fine there!

Using ADO.NET also works like a charm:

using(SqlConnection _con = new SqlConnection("server=(local);
                             database=test;integrated security=SSPI;"))
{
    using(SqlCommand _cmd = new SqlCommand
            ("INSERT INTO dbo.TableWithOnlyIdentity DEFAULT VALUES", _con))
    {
        _con.Open();
        _cmd.ExecuteNonQuery();
        _con.Close();
    }
}   

Seems to be a limitation of VS - don't use VS for serious DB work :-) Marc
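INSERT ... DEFAULT VALUES isn't a SQL Server oddity, either; even SQLite accepts the same statement for a table whose only column autogenerates (table name reused from the example above, schema invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# An INTEGER PRIMARY KEY column autogenerates, much like IDENTITY.
con.execute("CREATE TABLE TableWithOnlyIdentity (id INTEGER PRIMARY KEY)")
for _ in range(3):
    con.execute("INSERT INTO TableWithOnlyIdentity DEFAULT VALUES")

ids = [r[0] for r in con.execute("SELECT id FROM TableWithOnlyIdentity ORDER BY id")]
print(ids)  # [1, 2, 3]
```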

qid & accept id: (1410216, 1411458) query: DB2 SQL add rows based on other rows soup:

soup wrap:

DrJokepu's solution is OK, but it depends on whether what you call "Changes" in your question is fixed. I.e.: are you always going to change the 2nd column by +1? Or are those changes "dynamic", in the sense that you have to decide at runtime which changes you're going to apply?

DB2 and other SQL dialects have constructs (like INSERT INTO ... SELECT in DB2, or SELECT INTO in MS-SQL) that will allow you to construct one set of rows from another.

If I am not mistaken, you want to do this:

  1. Insert some values into a table that come from a select (what you call "old")
  2. Create another set of records (like the "old" ones) but modify their values.

Or maybe you just want to do number 2.

Number 1 is easy, as Dr.Jokepu already showed you:

INSERT INTO <target table> (values) SELECT "values" FROM <source table>;

Number 2 you can always do in the same query, adding the changes as you select:

INSERT INTO MDSTD.MBANK ( MID, MAGN, MAAID, MTYPEOT, MAVAILS, MUSER, MTS)
SELECT 
      MID 
     ,MAGN + 1
     ,0 as MAAID
     ,MTYPEOT
     ,'A' as MAVAILS
     ,MUSER
     ,GETDATE() 
FROM mdstd.mbank 
WHERE MTYPEOT = '2' and MAVAILS = 'A'

(note the GETDATE() is a MS-SQL function, I don't remember the exact function for DB/2 at this moment).

One question remains, in your example you mentioned:

"New = A Old = O"

If Old changes to "O", do you really want to change the original row? The answer to this question depends on the exact task you want to accomplish, which still isn't clear to me.

That is: do you want to duplicate the rows and change only the "copies", or copy them and change both sets (old and new) using different rules?

UPDATE After rereading your post I understand you want to do this:

  1. Duplicate a set of records (effectively copying them) but modifying their values.
  2. Modify the original set of records before you duplicated them

If that is the case, I don't think you can do it in "two" queries, because you'll have no way to know what is the old row and what is the new one if you have already duplicated.

A valid option is to create a temporary table and copy the rows there (modifying them as the "new" ones with the query I've provided). Then, in the original table, execute an UPDATE (using the same WHERE clause to make sure you're modifying the same rows) to change the "old" values to whatever you want. Finally, insert the new rows (what we called "new"), already modified, back into the original table, and drop the temp table.

Phew!

Sounds weird, but unless we're talking about zillions of records every minute, this ought to be a kind of fast operation.
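The "number 2" pattern above can be sketched in SQLite (cut-down table with invented data; GETDATE() and the extra columns omitted): the changes ride along inside the SELECT that feeds the INSERT.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mbank (mid INTEGER, magn INTEGER, mavails TEXT)")
con.executemany("INSERT INTO mbank VALUES (?, ?, ?)",
                [(1, 10, 'A'), (2, 20, 'X')])

# Duplicate the matching rows, applying the changes inside the SELECT:
# the copies get magn + 1 and keep mavails = 'A'.
con.execute("""INSERT INTO mbank (mid, magn, mavails)
               SELECT mid, magn + 1, 'A' FROM mbank WHERE mavails = 'A'""")

rows = con.execute("SELECT mid, magn FROM mbank ORDER BY mid, magn").fetchall()
print(rows)  # [(1, 10), (1, 11), (2, 20)] -- one modified copy added
```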

qid & accept id: (1421404, 1421486) query: Find out which tables were affected by Triggers soup:

soup wrap:

Show the cascades and constraints:

mysql> SHOW CREATE TABLE tablename;

Show triggers:

mysql> USE dbname;
mysql> show triggers;
qid & accept id: (1479831, 1479840) query: Using ranking-function derived column in where clause (SQL Server 2008) soup:

soup wrap:

You must move the WHERE clause above the projection list where the RowNumber column is created. Use a derived table or a CTE:

SELECT * 
  FROM (
   SELECT *, ROW_NUMBER() OVER (...) as RowNumber
   FROM ...) As ...
 WHERE RowNumber = ...

the equivalent CTE is:

WITH cte AS (
SELECT *, ROW_NUMBER() OVER (...) as RowNumber
       FROM ...)
SELECT * FROM cte 
WHERE RowNumber = ...   
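The CTE shape works the same way in SQLite, which also forbids referencing a window function from the same query's WHERE clause. A filled-in sketch (table, data and the "second-highest score" filter are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE scores (player TEXT, score INTEGER)")
con.executemany("INSERT INTO scores VALUES (?, ?)",
                [("ann", 7), ("bob", 9), ("cat", 5)])

# ROW_NUMBER() can't appear in this query's own WHERE clause, so wrap
# it in a CTE and filter one level up -- here: the second-highest score.
row = con.execute("""
    WITH cte AS (
        SELECT player, score,
               ROW_NUMBER() OVER (ORDER BY score DESC) AS RowNumber
        FROM scores
    )
    SELECT player, score FROM cte WHERE RowNumber = 2
""").fetchone()

print(row)  # ('ann', 7)
```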
qid & accept id: (1627604, 1627661) query: SQL Query including time calculation soup:

soup wrap:

I'm not sure it's specified in the SQL Standard, but most SQL implementations have some sort of function for determining intervals. It's really going to boil down to what flavor of SQL you're using.

If you're working with Oracle/PLSQL:

SELECT NumToDSInterval((enddate - startdate) * 24 * 60, 'MINUTE') FROM MyTable

In SQL Server/T-SQL:

SELECT DateDiff(n, startdate, enddate) FROM MyTable

In MySQL:

SELECT SubTime(enddate, startdate) FROM MyTable;

I'm sure there's one for SQLite and Postgres and any other flavor as well.
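For SQLite specifically, there is no DateDiff; subtracting julianday() values gives a difference in days, which you scale to minutes. A sketch with two hypothetical timestamps:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# julianday() difference is in days; * 24 * 60 converts to minutes.
minutes = con.execute("""
    SELECT CAST((julianday('2009-10-27 12:45:00')
               - julianday('2009-10-27 11:15:00')) * 24 * 60 AS INTEGER)
""").fetchone()[0]

print(minutes)  # 90
```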

qid & accept id: (1700110, 1700356) query: How do I select min/max dates from table 2 based on date in table 1 (without getting too much data from sums) soup:

soup wrap:

If the monthly table contains a single entry for each month, you can simply do this:

select
    m.date as m1,
    m.other_field,
    min(d.date) as m2,
    max(d.date) as m3
from monthly m
join daily d
    on month(d.date) = month(m.date)
    and year(d.date) = year(m.date)
group by m.date, m.other_field
order by m.date

otherwise:

select m1, sum(other_field), m2, m3
from (
        select
        m.date as m1,
        m.other_field,
        min(d.date) as m2,
        max(d.date) as m3
    from monthly m
    join daily d
        on month(d.date) = month(m.date)
        and year(d.date) = year(m.date)
    group by m.date, m.other_field) A
group by A.m1, A.m2, A.m3
order by A.m1

Update from pax: Try as I might, I could not get the join solutions working properly - they all seemed to return the same wrong data as the original. In the end, I opted for a non-join solution since it worked and performance wasn't a big issue, since the tables typically have 24 rows (for monthly) and 700 rows (for daily). I'm editing this answer and accepting it since (1) it actually helped a great deal in getting the correct solution for me; and (2) I'm loath to write my own answer and claim the glory for myself.

Thanks for all your help. The following is what worked for me:

select
    m.date as p1,
    m.grouping_field as p2,
    sum(m.aggregating_field) as p3,
    (select min(date) from daily
        where month(date) = month(m.date)
        and year(date) = year(m.date)) as p4,
    (select max(date) from daily
        where month(date) = month(m.date)
        and year(date) = year(m.date)) as p5
from
    monthly m
group by
    m.date, m.grouping_field

which gave me what I wanted:

    P1       P2    P3       P4         P5
----------  ----  ----  ----------  ----------
2007-10-01  BoxA  12.3  2007-10-16  2007-10-30
2007-10-01  BoxB  13.6  2007-10-16  2007-10-30
2007-10-01  BoxC   7.4  2007-10-16  2007-10-30
2007-11-01  BoxA  20.3  2007-11-01  2007-11-30
2007-11-01  BoxB  24.2  2007-11-01  2007-11-30
2007-11-01  BoxC  21.7  2007-11-01  2007-11-30
2007-12-01  BoxA   6.9  2007-12-01  2007-12-15
2007-12-01  BoxB   6.4  2007-12-01  2007-12-15
2007-12-01  BoxC   6.9  2007-12-01  2007-12-15
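
The accepted correlated-subquery approach can be sanity-checked outside the original database. Below is a minimal sketch in Python with SQLite, where `strftime('%Y-%m', ...)` stands in for the `month()`/`year()` pair (SQLite has neither function), an `ORDER BY` is added for a deterministic result, and the sample rows are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE monthly (date TEXT, grouping_field TEXT, aggregating_field REAL);
CREATE TABLE daily   (date TEXT);

INSERT INTO monthly VALUES
    ('2007-10-01', 'BoxA', 5.0), ('2007-10-01', 'BoxA', 7.3),
    ('2007-10-01', 'BoxB', 13.6);
INSERT INTO daily VALUES ('2007-10-16'), ('2007-10-22'), ('2007-10-30');
""")

rows = con.execute("""
    SELECT m.date                   AS p1,
           m.grouping_field         AS p2,
           SUM(m.aggregating_field) AS p3,
           -- correlated subqueries: first/last daily date in m.date's month
           (SELECT MIN(date) FROM daily
             WHERE strftime('%Y-%m', date) = strftime('%Y-%m', m.date)) AS p4,
           (SELECT MAX(date) FROM daily
             WHERE strftime('%Y-%m', date) = strftime('%Y-%m', m.date)) AS p5
      FROM monthly m
     GROUP BY m.date, m.grouping_field
     ORDER BY p1, p2
""").fetchall()

for r in rows:
    print(r)
```

Each group gets the month's first and last daily date attached, matching the shape of the result table above.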
qid & accept id: (1712077, 1713063) query: Wipe data from Oracle DB soup:

The easiest way would be to drop the schema the objects are associated to:

DROP USER [schema name] CASCADE

Nuke it from orbit - it's the only way to be sure ;)

For the script you provided, you could instead run those queries without having to generate the intermediate script, using the following anonymous PL/SQL block:

BEGIN

  --Bye Views!
  FOR i IN (SELECT uv.view_name
              FROM USER_VIEWS uv) LOOP
    EXECUTE IMMEDIATE 'drop view '|| i.view_name ||'';
  END LOOP;

  --Bye Sequences!
  FOR i IN (SELECT us.sequence_name
              FROM USER_SEQUENCES us) LOOP
    EXECUTE IMMEDIATE 'drop sequence '|| i.sequence_name ||'';
  END LOOP;

  --Bye Tables!
  FOR i IN (SELECT ut.table_name
              FROM USER_TABLES ut) LOOP
    EXECUTE IMMEDIATE 'drop table '|| i.table_name ||' CASCADE CONSTRAINTS ';
  END LOOP;

  --Bye Procedures/Functions/Packages!
  FOR i IN (SELECT us.name,
                   us.type
              FROM USER_SOURCE us
             WHERE us.type IN ('PROCEDURE', 'FUNCTION', 'PACKAGE')
          GROUP BY us.name, us.type) LOOP
    EXECUTE IMMEDIATE 'drop '|| i.type ||' '|| i.name ||'';
  END LOOP;

  --Bye Synonyms!
  FOR i IN (SELECT us.synonym_name
              FROM USER_SYNONYMS us
             WHERE us.synonym_name NOT LIKE 'sta%' 
               AND us.synonym_name LIKE 's_%') LOOP
    EXECUTE IMMEDIATE 'drop synonym '|| i.synonym_name ||'';
  END LOOP;

END;
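
The same catalog-driven pattern - read object names from the data dictionary, then execute one DROP per object - can be sketched outside Oracle. Here it is against SQLite's `sqlite_master` catalog, with `cursor.execute` standing in for `EXECUTE IMMEDIATE`; the object names are made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1 (a INTEGER);
CREATE TABLE t2 (b INTEGER);
CREATE VIEW  v1 AS SELECT a FROM t1;
""")

# Views first, then tables, mirroring the ordering of the PL/SQL loops.
for kind in ("view", "table"):
    # materialise the names before dropping, as the cursor loops do
    names = [r[0] for r in con.execute(
        "SELECT name FROM sqlite_master WHERE type = ?", (kind,))]
    for name in names:
        con.execute(f'DROP {kind.upper()} "{name}"')

remaining = con.execute("SELECT count(*) FROM sqlite_master").fetchone()[0]
print(remaining)  # 0: every object has been dropped
```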
qid & accept id: (1742507, 1742528) query: AUTO-Parametrized multiple SELECT soup:

Yes, absolutely. For example:

select cnt, count(*) from
( select department_id, count(*) as cnt
  from employees
  group by department_id
)
group by cnt;

This gives the "count of counts".
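
For a quick check, the same query runs unchanged against an in-memory SQLite database (sample employees rows invented; an ORDER BY cnt is added for a deterministic result):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees (emp_name TEXT, department_id INTEGER)")
con.executemany("INSERT INTO employees VALUES (?, ?)",
                [("a", 10), ("b", 10), ("c", 20), ("d", 20), ("e", 30)])

rows = con.execute("""
    SELECT cnt, count(*) FROM
    ( SELECT department_id, count(*) AS cnt
        FROM employees
       GROUP BY department_id
    )
    GROUP BY cnt
    ORDER BY cnt
""").fetchall()

print(rows)  # [(1, 1), (2, 2)]: one department of size 1, two of size 2
```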

Or perhaps you mean something more like this, which is also valid:

select emp_name
from employees
where department_id in
( select department_id
  from departments
  where location_id in
  ( select location_id from locations
    where country = 'US'
  )
);
qid & accept id: (1747745, 1748115) query: How to put a constraint on two combined fields? soup:

One possibility would be to hold a computed column on table1 i.e.

fieldx = (field1 || field2)

I don't know if DB2 supports computed (aka virtual) columns as such, but if not you can create a regular column and maintain it via a trigger. Then create the foreign key constraint:

ALTER TABLE table1
    ADD CONSTRAINT foo FOREIGN KEY (fieldx) REFERENCES table2 (fieldx);

Another possibility, of course, would be to modify your table design so that the keys are held consistently: if field1 and field2 are atomic values, then they should appear as such in table2, not as a concatenated value (which more or less breaks 1NF).
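
As a sketch of the trigger-maintained variant, here is the same idea in SQLite (easier to run inline than DB2): a trigger keeps `table1.fieldx` equal to `field1 || field2`, and a foreign key points it at `table2`. All table contents are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
CREATE TABLE table2 (fieldx TEXT PRIMARY KEY);
CREATE TABLE table1 (
    field1 TEXT,
    field2 TEXT,
    fieldx TEXT REFERENCES table2 (fieldx)
);

-- keep fieldx equal to field1 || field2 on every insert
CREATE TRIGGER table1_fieldx AFTER INSERT ON table1
BEGIN
    UPDATE table1 SET fieldx = NEW.field1 || NEW.field2
     WHERE rowid = NEW.rowid;
END;

INSERT INTO table2 VALUES ('AB');
INSERT INTO table1 (field1, field2) VALUES ('A', 'B');
""")

row = con.execute("SELECT field1, field2, fieldx FROM table1").fetchone()
print(row)  # ('A', 'B', 'AB')

# a combination with no matching parent row is rejected by the constraint
try:
    con.execute("INSERT INTO table1 (field1, field2) VALUES ('X', 'Y')")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```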

qid & accept id: (1773534, 1773691) query: What is the right way to call an Oracle stored function from ado.net and get the result? soup:

I'll assume you are using ODP.NET (the native Oracle client for .NET).

Let's say you have 2 Oracle stored functions like this:

   FUNCTION my_func
   (
      p_parm1 VARCHAR2
    , p_parm2 NUMBER
   ) RETURN VARCHAR2
   AS
   BEGIN
      RETURN p_parm1 || to_char(p_parm2);
   END;

   FUNCTION my_func2 RETURN SYS_REFCURSOR
   AS
      v_cursor SYS_REFCURSOR;
   BEGIN
      OPEN v_cursor FOR
         SELECT 'hello there Sean' col1
           FROM dual
          UNION ALL
         SELECT 'here is your answer' col1
           FROM dual;      
      RETURN v_cursor;          
   END;

One of the functions returns a VARCHAR2 and the other returns a ref cursor. On the VB.NET side, you could do this:

Dim con As New OracleConnection("Data Source=xe;User Id=sandbox;Password=sandbox; Promotable Transaction=local")

Try
    con.Open()
    Dim cmd As OracleCommand = con.CreateCommand()
    cmd.CommandText = "test_pkg.my_func"
    cmd.CommandType = CommandType.StoredProcedure

    Dim parm As OracleParameter

    parm = New OracleParameter()
    parm.Direction = ParameterDirection.ReturnValue
    parm.OracleDbType = OracleDbType.Varchar2
    parm.Size = 5000
    cmd.Parameters.Add(parm)

    parm = New OracleParameter()
    parm.Direction = ParameterDirection.Input
    parm.Value = "abc"
    parm.OracleDbType = OracleDbType.Varchar2
    cmd.Parameters.Add(parm)

    parm = New OracleParameter()
    parm.Direction = ParameterDirection.Input
    parm.Value = 42
    parm.OracleDbType = OracleDbType.Int32
    cmd.Parameters.Add(parm)

    cmd.ExecuteNonQuery()
    Console.WriteLine("result of first function is " + cmd.Parameters(0).Value)

    '''''''''''''''''''''''''''''''''''''''''''''
    ' now for the second query
    '''''''''''''''''''''''''''''''''''''''''''''
    cmd = con.CreateCommand()
    cmd.CommandText = "test_pkg.my_func2"
    cmd.CommandType = CommandType.StoredProcedure

    parm = New OracleParameter()
    parm.Direction = ParameterDirection.ReturnValue
    parm.OracleDbType = OracleDbType.RefCursor
    cmd.Parameters.Add(parm)

    Dim dr As OracleDataReader = cmd.ExecuteReader()
    While (dr.Read())
        Console.WriteLine(dr(0))
    End While

Finally
    If (Not (con Is Nothing)) Then
        con.Close()
    End If
End Try
qid & accept id: (1784283, 1784364) query: SQL Server 2005/2008 Group By statement with parameters without using dynamic SQL? soup:

You can group on a constant, which might be useful:

SELECT
    SUM(Column0),
    CASE @MyVar WHEN 'Column1' THEN Column1 ELSE '' END AS MyGrouping
FROM
    Table1
GROUP BY
    CASE @MyVar WHEN 'Column1' THEN Column1 ELSE '' END

Edit: This version handles datatype mismatches and multiple values, and allows you to group on both columns...

SELECT
    SUM(Column0),
    CASE @MyVar WHEN 'Column1' THEN Column1 ELSE NULL END AS Column1,
    CASE @MyVar WHEN 'Column2' THEN Column2 ELSE NULL END AS Column2
FROM
    Table1
GROUP BY
    CASE @MyVar WHEN 'Column1' THEN Column1 ELSE NULL END,
    CASE @MyVar WHEN 'Column2' THEN Column2 ELSE NULL END
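
A quick way to see this behave is to bind the variable as a query parameter from a scripting language. The sketch below uses Python with SQLite; `:v` plays the role of `@MyVar`, and `table1` and its contents are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (Column0 REAL, Column1 TEXT, Column2 TEXT)")
con.executemany("INSERT INTO table1 VALUES (?, ?, ?)",
                [(1.0, "x", "p"), (2.0, "x", "q"), (4.0, "y", "p")])

# :v plays the role of @MyVar; the CASE picks the grouping column at run time
sql = """
    SELECT SUM(Column0),
           CASE :v WHEN 'Column1' THEN Column1 ELSE NULL END,
           CASE :v WHEN 'Column2' THEN Column2 ELSE NULL END
      FROM table1
     GROUP BY CASE :v WHEN 'Column1' THEN Column1 ELSE NULL END,
              CASE :v WHEN 'Column2' THEN Column2 ELSE NULL END
     ORDER BY 1
"""

by_col1   = con.execute(sql, {"v": "Column1"}).fetchall()
ungrouped = con.execute(sql, {"v": "none"}).fetchall()
print(by_col1)    # [(3.0, 'x', None), (4.0, 'y', None)]
print(ungrouped)  # [(7.0, None, None)]
```

With `:v = 'Column1'` the rows group by Column1; with any other value every CASE yields NULL and everything collapses into one group, exactly as the answer describes.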
qid & accept id: (1785942, 1786090) query: How can I use check constraint in sql server 2005 soup:

There is quite a wealth of information in the SQL Server documentation on this, but the two statements to create the check constraints you ask for are:

ALTER TABLE tablename ADD CONSTRAINT constraintName CHECK (colname between 1 and 5);

ALTER TABLE tablename ADD CONSTRAINT constraintName CHECK (colname in (1,2,4));

The condition of a check constraint can include:

  1. A list of constant expressions introduced with in

  2. A range of constant expressions introduced with between

  3. A set of conditions introduced with like, which may contain wildcard characters

This allows you to have conditions like:

(colname >= 1 AND colname <= 5)
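
The same two checks can be exercised quickly in SQLite (which only accepts CHECK constraints inline in CREATE TABLE, not via ALTER TABLE ADD CONSTRAINT); out-of-range inserts fail with a constraint error, just as they would in SQL Server:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE tablename (
        colname  INTEGER CHECK (colname BETWEEN 1 AND 5),
        colname2 INTEGER CHECK (colname2 IN (1, 2, 4))
    )
""")

con.execute("INSERT INTO tablename VALUES (3, 4)")  # passes both checks

for bad in [(6, 1),   # 6 violates BETWEEN 1 AND 5
            (2, 3)]:  # 3 violates IN (1, 2, 4)
    try:
        con.execute("INSERT INTO tablename VALUES (?, ?)", bad)
    except sqlite3.IntegrityError as exc:
        print("rejected", bad, "->", exc)

n = con.execute("SELECT count(*) FROM tablename").fetchone()[0]
print(n)  # 1: only the valid row was stored
```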
qid & accept id: (1809787, 1809981) query: Oracle: How do I determine the NEW name of an object in an "AFTER ALTER" trigger? soup:

ALTER RENAME won't fire the trigger, RENAME x TO y will.

As for your question about the names before and after, I think you will have to parse the DDL to retrieve them, like this:

CREATE OR REPLACE TRIGGER MK_BEFORE_RENAME BEFORE RENAME ON SCHEMA 
DECLARE 
  sql_text ora_name_list_t;
  v_stmt VARCHAR2(2000);
  n PLS_INTEGER; 
BEGIN  
  n := ora_sql_txt(sql_text);
  FOR i IN 1..n LOOP
   v_stmt := v_stmt || sql_text(i);
  END LOOP;

  Dbms_Output.Put_Line( 'Before: ' || regexp_replace( v_stmt, 'rename[[:space:]]+([a-z0-9_]+)[[:space:]]+to.*', '\1', 1, 1, 'i' ) );
  Dbms_Output.Put_Line( 'After: ' || regexp_replace( v_stmt, 'rename[[:space:]]+.*[[:space:]]+to[[:space:]]+([a-z0-9_]+)', '\1', 1, 1, 'i' ) );
END;

The regular expressions could surely be written more clearly, but it works:

RENAME 
mktestx
TO                 mktesty;

Before: mktestx
After: mktesty

UPDATE To accommodate your changed question:

CREATE OR REPLACE TRIGGER MK_AFTER_ALTER AFTER ALTER ON SCHEMA 
DECLARE 
  sql_text ora_name_list_t;
  v_stmt VARCHAR2(2000);
  n PLS_INTEGER; 
BEGIN  
  n := ora_sql_txt(sql_text);
  FOR i IN 1..n LOOP
   v_stmt := v_stmt || sql_text(i);
  END LOOP;

  Dbms_Output.Put_Line( 'Before: ' || regexp_replace( v_stmt, 'alter[[:space:]]+table[[:space:]]+([a-z0-9_]+)[[:space:]]+rename[[:space:]]+to.*', '\1', 1, 1, 'i' ) );
  Dbms_Output.Put_Line( 'After: ' || regexp_replace( v_stmt, 'alter[[:space:]]+table[[:space:]]+.*to[[:space:]]+([a-z0-9_]+)', '\1', 1, 1, 'i' ) );
END;
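
The extraction itself is just a regular-expression substitution, so it can be tested outside the trigger. Here is the ALTER ... RENAME variant in Python's `re` for illustration, with the POSIX `[[:space:]]` class translated to `\s` and `regexp_replace`'s trailing `'i'` argument becoming `re.IGNORECASE`:

```python
import re

stmt = "ALTER TABLE mktestx RENAME TO mktesty"

# [[:space:]] -> \s; regexp_replace(..., 1, 1, 'i') -> count=1 + IGNORECASE
before = re.sub(r"alter\s+table\s+([a-z0-9_]+)\s+rename\s+to.*", r"\1",
                stmt, count=1, flags=re.IGNORECASE)
after  = re.sub(r"alter\s+table\s+.*to\s+([a-z0-9_]+)", r"\1",
                stmt, count=1, flags=re.IGNORECASE)

print("Before:", before)  # Before: mktestx
print("After:", after)    # After: mktesty
```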
qid & accept id: (1822504, 1822671) query: Determine existence of results in jet SQL? soup:

How about:

SELECT TOP 1 IIF(EXISTS(
       SELECT * FROM foo 
       WHERE ), 0, 1) As f1 
FROM foo

Perhaps more clearly:

SELECT TOP 1 IIF(EXISTS(
       SELECT * FROM foo
       WHERE ), 0, 1) As F1 
FROM MSysObjects
qid & accept id: (1830015, 1830082) query: Boolean expressions for a tagging system in SQL soup:

Assuming that data -> items, word -> name and tagged_item -> tagged_items.

This is for "tag1 AND (tag2 OR tag3) AND NOT tag4 OR tag5". I'm sure you can figure out the rest.

SELECT items.* FROM items
    LEFT JOIN (SELECT i1.item_id FROM tagged_items AS i1 INNER JOIN tags AS t1 ON i1.tag_id = t1.id AND t1.name = 'tag1') AS ti1 ON items.id = ti1.item_id
    LEFT JOIN (SELECT i2.item_id FROM tagged_items AS i2 INNER JOIN tags AS t2 ON i2.tag_id = t2.id AND t2.name = 'tag2') AS ti2 ON items.id = ti2.item_id
    LEFT JOIN (SELECT i3.item_id FROM tagged_items AS i3 INNER JOIN tags AS t3 ON i3.tag_id = t3.id AND t3.name = 'tag3') AS ti3 ON items.id = ti3.item_id
    LEFT JOIN (SELECT i4.item_id FROM tagged_items AS i4 INNER JOIN tags AS t4 ON i4.tag_id = t4.id AND t4.name = 'tag4') AS ti4 ON items.id = ti4.item_id
    LEFT JOIN (SELECT i5.item_id FROM tagged_items AS i5 INNER JOIN tags AS t5 ON i5.tag_id = t5.id AND t5.name = 'tag5') AS ti5 ON items.id = ti5.item_id
WHERE ti1.item_id IS NOT NULL AND (ti2.item_id IS NOT NULL OR ti3.item_id IS NOT NULL) AND ti4.item_id IS NULL OR ti5.item_id IS NOT NULL;

Edit: If you want to avoid subqueries, you could do this:

SELECT items.* FROM items 
    LEFT JOIN tagged_items AS i1 ON items.id = i1.item_id LEFT JOIN tags AS t1 ON i1.tag_id = t1.id AND t1.name = 'tag1'
    ...
WHERE t1.item_id IS NOT NULL ...

I'm not sure why you'd want to do it though, as the additional left joins will likely result in a slower run.
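
To see the NULL-test logic work, here is the same LEFT JOIN pattern reduced to the expression "tag1 AND NOT tag2" and run in SQLite; the schema follows the renamed tables above, and the rows are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE items        (id INTEGER PRIMARY KEY, body TEXT);
CREATE TABLE tags         (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE tagged_items (item_id INTEGER, tag_id INTEGER);

INSERT INTO items VALUES (1, 'tag1 only'), (2, 'tag1 and tag2'), (3, 'tag2 only');
INSERT INTO tags  VALUES (10, 'tag1'), (20, 'tag2');
INSERT INTO tagged_items VALUES (1, 10), (2, 10), (2, 20), (3, 20);
""")

rows = con.execute("""
    SELECT items.id FROM items
      LEFT JOIN (SELECT i1.item_id FROM tagged_items AS i1
                  INNER JOIN tags AS t1
                     ON i1.tag_id = t1.id AND t1.name = 'tag1') AS ti1
             ON items.id = ti1.item_id
      LEFT JOIN (SELECT i2.item_id FROM tagged_items AS i2
                  INNER JOIN tags AS t2
                     ON i2.tag_id = t2.id AND t2.name = 'tag2') AS ti2
             ON items.id = ti2.item_id
     WHERE ti1.item_id IS NOT NULL  -- tag1
       AND ti2.item_id IS NULL      -- AND NOT tag2
""").fetchall()

print(rows)  # [(1,)]: only item 1 carries tag1 without tag2
```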

qid & accept id: (1853433, 1853967) query: SQL Server locks - avoid insertion of duplicate entries soup:

To keep locks between multiple statements, they have to be wrapped in a transaction. In your example:

IF NOT EXISTS (SELECT 1 FROM t3 with (updlock) where t3.a=-86)
    INSERT INTO T3 SELECT -86,-86

The update lock can be released before the insert is executed. This would work reliably:

begin transaction
IF NOT EXISTS (SELECT 1 FROM t3 with (updlock) where t3.a=-86)
    INSERT INTO T3 SELECT -86,-86
commit transaction

Single statements are always wrapped in a transaction, so this would work too:

 INSERT INTO T3 SELECT -86,-86
 WHERE NOT EXISTS (SELECT 1 FROM t3 with (updlock) where t3.a=-86)

(This is assuming you have "implicit transactions" turned off, like the default SQL Server setting.)
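
The UPDLOCK hint is SQL Server specific, but the shape of the single-statement version - and the fact that a second execution inserts nothing - can be sketched in SQLite (hint dropped, table invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t3 (a INTEGER, b INTEGER)")

stmt = """
    INSERT INTO t3
    SELECT -86, -86
     WHERE NOT EXISTS (SELECT 1 FROM t3 WHERE t3.a = -86)
"""
con.execute(stmt)  # first run: no such row yet, so it inserts
con.execute(stmt)  # second run: the row exists, so it inserts nothing

n = con.execute("SELECT count(*) FROM t3 WHERE a = -86").fetchone()[0]
print(n)  # 1
```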

qid & accept id: (1858559, 1860098) query: Search literal within a word soup:

I think it would be better to fetch the entries and then perform the text manipulation (in this case, a search) over the fetched data!

Because any text manipulation or complex query takes more resources, and if your database contains a lot of data, the query becomes too slow! Moreover, if you are running your query on a shared server, that increases the performance issues!

You can easily accomplish what you are trying to do with regex, once you have fetched the data from the database!


UPDATE: My suggestion is the same even if you are running your script on a dedicated server! However, if you want to perform a full-text search of the word "literal" in BOOLEAN MODE like you have described, you can remove the + operator (because you are searching only one word) and construct the query as follows:

SELECT listOfColumnNames FROM tableName
WHERE MATCH (colName)
AGAINST ('literal*' IN BOOLEAN MODE);

However, even if you add the AND operator, your query works fine: tested on Apache Server with MySQL 5.1!

I suggest you to read the documentation about the full-text search in boolean mode.

The only problem with this query is that it doesn't match the word "literal" if it is a substring inside another word, for example: "textliteraltext". As you noticed, you can't use the * operator at the beginning of the word!

So, to accomplish what you are trying to do, the fastest and easiest way is to follow the suggestion of Paul, using the % placeholder:

SELECT listOfColumnNames FROM tableName
WHERE colName LIKE '%literal%';
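
For completeness, the LIKE fallback is easy to verify: unlike the boolean-mode search, it also finds the word buried inside another one. A SQLite sketch with invented rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (colName TEXT)")
con.executemany("INSERT INTO t VALUES (?)",
                [("a literal value",), ("textliteraltext",), ("no match",)])

rows = con.execute(
    "SELECT colName FROM t WHERE colName LIKE '%literal%'").fetchall()
print(rows)  # [('a literal value',), ('textliteraltext',)]
```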
qid & accept id: (1979522, 1979549) query: How to fetch an object graph at once? soup:

A simple JOIN would do the trick:

SELECT     o.*
,          i.*
FROM       orders o
INNER JOIN order_items i
ON         o.id = i.order_id

This will return one row for each row in order_items. The returned rows consist of all fields from the orders table and, concatenated to that, all fields from the order_items table (quite literally, the records from the tables are joined, that is, they are combined by record concatenation).

So if orders has (id, order_date, customer_id) and order_items has (order_id, product_id, price) the result of the statement above will consist of records with (id, order_date, customer_id, order_id, product_id, price)

One thing you need to be aware of is that this approach breaks down whenever there are two distinct 'detail' tables for one 'master'. Let me explain.

In the orders/order_items example, orders is the master and order_items is the detail: each row in order_items belongs to, or is dependent on exactly one row in orders. The reverse is not true: one row in the orders table can have zero or more related rows in the order_items table. The join condition

ON o.id = i.order_id 

ensures that only related rows are combined and returned (leaving out the condition would return all possible combinations of rows from the two tables, assuming the database would allow you to omit the join condition).

Now, suppose you have one master with two details, for example, customers as master and customer_orders as detail1 and customer_phone_numbers as detail2. Suppose you want to retrieve a particular customer along with all its orders and all its phone numbers. You might be tempted to write:

SELECT     c.*, o.*, p.*
FROM       customers                c
INNER JOIN customer_orders          o
ON         c.id                   = o.customer_id
INNER JOIN customer_phone_numbers   p
ON         c.id                   = p.customer_id

This is valid SQL, and it will execute (assuming the tables and column names are in place). But the problem is that it will give you a rubbish result. Assuming you have one customer with two orders (1, 2) and two phone numbers (A, B), you get these records:

customer-data | order 1 | phone A
customer-data | order 2 | phone A
customer-data | order 1 | phone B
customer-data | order 2 | phone B

This is rubbish, as it suggests there is some relationship between order 1 and phone numbers A and B and order 2 and phone numbers A and B.

What's worse is that these results can completely explode in numbers of records, much to the detriment of database performance.

So, JOIN is excellent for "flattening" a hierarchy of known depth (customer -> orders -> order_items) into one big table which only duplicates the master items for each detail item. But it is awful for extracting a true graph of related items. This is a direct consequence of the way SQL is designed - it can only output normalized tables without repeating groups. This is why object-relational mappers exist: they allow object definitions with multiple dependent collections of subordinate objects to be stored and retrieved from a relational database without losing your sanity as a programmer.

qid & accept id: (2044752, 2045014) query: SQL mapping between multiple tables soup:

soup wrap:

To expand on Arthur Thomas's solution here's a union without the WHERE in the subselects so that you can create a universal view:

SELECT A.Name as Animal, B.Name as Zoo FROM A, AtoB, B
    WHERE AtoB.A_ID = A.ID && B.ID = AtoB.B_ID 
UNION
SELECT C.Name as Animal, B.Name as Zoo FROM C, CtoB, B
    WHERE CtoB.C_ID = C.ID && B.ID = CtoB.B_ID

Then, you can perform a query like:

SELECT Animal FROM zoo_animals WHERE Zoo="Seattle Zoo"
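
A runnable sketch of this view using Python's sqlite3 (standard AND replaces MySQL's && operator, and all table contents are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE A (ID INTEGER, Name TEXT);
CREATE TABLE C (ID INTEGER, Name TEXT);
CREATE TABLE B (ID INTEGER, Name TEXT);
CREATE TABLE AtoB (A_ID INTEGER, B_ID INTEGER);
CREATE TABLE CtoB (C_ID INTEGER, B_ID INTEGER);
INSERT INTO B VALUES (1, 'Seattle Zoo');
INSERT INTO A VALUES (1, 'Lion');
INSERT INTO C VALUES (1, 'Falcon');
INSERT INTO AtoB VALUES (1, 1);
INSERT INTO CtoB VALUES (1, 1);

-- the universal view: one UNION branch per animal table
CREATE VIEW zoo_animals AS
SELECT A.Name AS Animal, B.Name AS Zoo FROM A, AtoB, B
    WHERE AtoB.A_ID = A.ID AND B.ID = AtoB.B_ID
UNION
SELECT C.Name AS Animal, B.Name AS Zoo FROM C, CtoB, B
    WHERE CtoB.C_ID = C.ID AND B.ID = CtoB.B_ID;
""")

animals = [r[0] for r in con.execute(
    "SELECT Animal FROM zoo_animals WHERE Zoo = 'Seattle Zoo' ORDER BY Animal")]
print(animals)  # ['Falcon', 'Lion']
```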
qid & accept id: (2045053, 2045069) query: MYSQL - Retrieve Timestamps between dates soup:
soup wrap:
SELECT timestamp
FROM   tablename
WHERE  timestamp >= userStartDate
       AND timestamp < userEndDate + INTERVAL 1 DAY

This will select every record whose date portion lies between userStartDate and userEndDate, provided these fields are of type DATE (without a time portion).

If the start and end dates come as strings, use STR_TO_DATE to convert from any given format:

SELECT timestamp
FROM   tablename
WHERE  timestamp >= STR_TO_DATE('01/11/2010', '%m/%d/%Y')
       AND timestamp < STR_TO_DATE('01/12/2010', '%m/%d/%Y') + INTERVAL 1 DAY
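
To see the end-exclusive trick in action, here is a sketch with Python's sqlite3 (which has no INTERVAL; date(..., '+1 day') plays that role, and the sample rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tablename (timestamp TEXT)")
con.executemany("INSERT INTO tablename VALUES (?)",
                [("2010-01-10 09:00:00",),   # before the start date: excluded
                 ("2010-01-11 00:00:00",),   # first moment of the range
                 ("2010-01-12 23:59:59",),   # still inside the end day
                 ("2010-01-13 00:00:00",)])  # first moment after the range: excluded

# end date + 1 day, compared with <, keeps the whole of the end day
rows = con.execute("""
    SELECT timestamp
    FROM   tablename
    WHERE  timestamp >= '2010-01-11'
           AND timestamp < date('2010-01-12', '+1 day')
    ORDER BY timestamp
""").fetchall()
print([r[0] for r in rows])  # ['2010-01-11 00:00:00', '2010-01-12 23:59:59']
```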
qid & accept id: (2056938, 2056970) query: SQL isolate greatest values in a column soup:

soup wrap:

These queries both isolate the row with the highest xfer_id for each distinct client_plt_id

select xfer_id, client_plt_id, xfer_doc_no
from   tab t1
where  xfer_id = (
       select max(xfer_id)
       from   tab t2
       where  t2.client_plt_id = t1.client_plt_id
   )

or, for mysql this may be better performing:

select xfer_id, client_plt_id, xfer_doc_no
from   tab t1
inner join (
       select max(xfer_id), client_plt_id
       from   tab
       group by client_plt_id
       ) t2
on     t1.client_plt_id = t2.client_plt_id
and    t1.xfer_id = t2.xfer_id

For both these queries, you can simply add a filter to select one particular client, for example t1.client_plt_id = 80016616 (as an extra AND condition in the first query, or a WHERE clause in the second).

If you simply want the one row with the highest xfer_id, regardless of client_plt_id, this is what you need:

select xfer_id, client_plt_id, xfer_doc_no
from   tab t1
where  xfer_id = (select max(xfer_id) from tab)
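
Here is the correlated-subquery form run against invented data, using Python's sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tab (xfer_id INTEGER, client_plt_id INTEGER, xfer_doc_no TEXT)")
con.executemany("INSERT INTO tab VALUES (?, ?, ?)",
                [(1, 100, 'a'), (2, 100, 'b'),   # client 100: highest xfer_id is 2
                 (3, 200, 'c'), (5, 200, 'd')])  # client 200: highest xfer_id is 5

# keep only the row whose xfer_id is the max for its own client_plt_id
rows = con.execute("""
    select xfer_id, client_plt_id, xfer_doc_no
    from   tab t1
    where  xfer_id = (
           select max(xfer_id)
           from   tab t2
           where  t2.client_plt_id = t1.client_plt_id
       )
    order by client_plt_id
""").fetchall()
print(rows)  # [(2, 100, 'b'), (5, 200, 'd')]
```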
qid & accept id: (2169720, 2169764) query: Oracle: pivot (coalesce) some counts onto a single row? soup:

soup wrap:

What you're looking for is pivoting - transposing the row data into columns.

Oracle 9i+, Using WITH/CTE:


Use:

WITH summary AS (
    SELECT TRUNC(ls.started,'HH') AS dt,
           ls.depot,
           COUNT(*) AS num_depot
      FROM logstats ls
  GROUP BY TRUNC(ls.started,'HH'), ls.depot)
  SELECT s.dt,
         MAX(CASE WHEN s.depot = 'foo' THEN s.num_depot ELSE 0 END) AS "count_of_foo",
         MAX(CASE WHEN s.depot = 'bar' THEN s.num_depot ELSE 0 END) AS "count_of_bar"
    FROM summary s
GROUP BY s.dt
ORDER BY s.dt

Non-WITH/CTE Equivalent


Use:

  SELECT s.dt,
         MAX(CASE WHEN s.depot = 'foo' THEN s.num_depot ELSE 0 END) AS "count_of_foo",
         MAX(CASE WHEN s.depot = 'bar' THEN s.num_depot ELSE 0 END) AS "count_of_bar"
    FROM (SELECT TRUNC(ls.started,'HH') AS dt,
                 ls.depot,
                 COUNT(*) AS num_depot
            FROM LOGSTATS ls
        GROUP BY TRUNC(ls.started, 'HH'), ls.depot) s
GROUP BY s.dt
ORDER BY s.dt

Pre-9i versions of Oracle would need the CASE expressions changed to DECODE, Oracle's proprietary IF/ELSE-style function.

Oracle 11g+, Using PIVOT


Untested:

  SELECT * 
    FROM (SELECT TRUNC(ls.started, 'HH') AS dt,
                 ls.depot
            FROM LOGSTATS ls
        GROUP BY TRUNC(ls.started, 'HH'), ls.depot)
   PIVOT (
     COUNT(*) FOR depot
   )
ORDER BY 1
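
The MAX(CASE ...) pivot is portable well beyond Oracle. A small sketch using Python's sqlite3 with invented log rows, where strftime stands in for TRUNC(started, 'HH'):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE logstats (started TEXT, depot TEXT)")
con.executemany("INSERT INTO logstats VALUES (?, ?)",
                [("2010-02-01 10:05", "foo"), ("2010-02-01 10:40", "foo"),
                 ("2010-02-01 10:55", "bar"), ("2010-02-01 11:10", "bar")])

# inner query counts per (hour, depot); outer query spreads depots into columns
rows = con.execute("""
    SELECT dt,
           MAX(CASE WHEN depot = 'foo' THEN n ELSE 0 END) AS count_of_foo,
           MAX(CASE WHEN depot = 'bar' THEN n ELSE 0 END) AS count_of_bar
    FROM (SELECT strftime('%Y-%m-%d %H', started) AS dt, depot, COUNT(*) AS n
          FROM logstats GROUP BY 1, 2)
    GROUP BY dt ORDER BY dt
""").fetchall()
print(rows)  # [('2010-02-01 10', 2, 1), ('2010-02-01 11', 0, 1)]
```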
qid & accept id: (2183107, 2184035) query: How to use foreign keys and a spatial index inside a MySQL table? soup:
soup wrap:

How can we combine fast child searches in a tree and also have a SPATIAL INDEX in a table?

Create the indexes on id and parentId of your table manually:

CREATE INDEX ix_mytable_parentid ON mytable (parentid)

Note that since id is most probably a PRIMARY KEY, no explicit index is required on it (one will be created implicitly).

BTW, if you have a natural geo-based hierarchy, what's the point of using parent-child relationships for searching?

You can make the queries use the SPATIAL indexes:

SELECT  *
FROM    mytable m1
JOIN    mytable m2
ON      MBRContains (m2.area, m1.area)
        AND m2.parentId = m1.id
WHERE   m1.name = 'London'

which will use the spatial index for searching and the relationship for fine filtering.

qid & accept id: (2199315, 2199341) query: How to get Microsoft SQL MATH POWER to show as decimal and not as INT (which it seems to do)? soup:

soup wrap:

The precision is lost because your input values are all integers.

Try

SELECT POWER(( 1.0 + 3.0 / 100.0 ), ( 1.0 / 365.0 ))

If this doesn't give sufficient precision, cast the inputs to POWER as floats:

SELECT POWER(( CAST(1.0 as float) + CAST(3.0 AS float) / 100.0 ), ( 1.0 / 365.0 ))
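
The root cause is easy to demonstrate: with integer operands the division truncates before POWER is ever applied. A quick sketch using Python's sqlite3, which truncates integer division the same way:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# with integer operands the division truncates to 0; with floats it doesn't
int_div, float_div = con.execute("SELECT 3 / 100, 3.0 / 100.0").fetchone()
print(int_div, float_div)  # 0 0.03

# the interest-rate expression from the answer, with float literals throughout
base = 1.0 + 3.0 / 100.0
daily = base ** (1.0 / 365.0)
print(round(daily, 6))  # 1.000081
```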
qid & accept id: (2289907, 2289947) query: Computing different sums depending on the value of one column soup:

soup wrap:

Here you can use the trick that boolean expressions evaluate to either 0 or 1 in some SQL dialects (MySQL, for instance):

SELECT a2 + a8 + a7 * (a1 BETWEEN 0 AND 2) AS SUM
FROM table_name

A more general (and more conventional) way is to use a CASE expression:

SELECT
    CASE WHEN a1 BETWEEN 0 AND 2
         THEN a2 + a7 + a8
         ELSE a2 + a8
    END AS SUM
FROM table_name

You can also do something like this to include a CASE expression without repeating the common terms:

SELECT
    a2 + a8 + (CASE WHEN a1 BETWEEN 0 AND 2 THEN a7 ELSE 0 END) AS SUM
FROM table_name
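
The CASE form is portable; here is a sketch using Python's sqlite3 with invented rows (the alias is renamed from SUM, a reserved word in most engines, to total):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table_name (a1 INTEGER, a2 INTEGER, a7 INTEGER, a8 INTEGER)")
con.executemany("INSERT INTO table_name VALUES (?, ?, ?, ?)",
                [(1, 10, 5, 20),   # a1 in 0..2  -> a2 + a7 + a8 = 35
                 (7, 10, 5, 20)])  # a1 outside  -> a2 + a8      = 30

# the common terms stay outside the CASE; only a7 is conditional
rows = con.execute("""
    SELECT a2 + a8 + (CASE WHEN a1 BETWEEN 0 AND 2 THEN a7 ELSE 0 END) AS total
    FROM table_name
    ORDER BY a1
""").fetchall()
print([r[0] for r in rows])  # [35, 30]
```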
qid & accept id: (2318539, 2318693) query: Paging and custom-ordering a result soup:

soup wrap:

Wrap your unioned queries in another one as a derived table and you can use the top clause.

SELECT TOP 100 * FROM (
   SELECT * FROM table where field = 'entry'
   UNION ALL
   SELECT * FROM table where field = 'entry#'
) sortedresults

You were on the right track, then. Add a constant column to each of your subsets of sorted results, and then you can use that column to preserve the ordering.

WITH SearchResults AS
  (SELECT *, ROW_NUMBER() OVER (ORDER BY QueryNum) as RowNum FROM
     (SELECT *, 1 as QueryNum FROM KeywordTable WHERE field = 'Keyword'
      UNION ALL
      SELECT *, 2 as QueryNum FROM KeywordTable WHERE field = 'Keyword#'
      ) SortedResults
  )
SELECT * FROM SearchResults WHERE RowNum BETWEEN 4 AND 10

It is important that you also sort each subquery by something other than the keyword so their order stays the same between runs (and use that as a secondary sort in the ROW_NUMBER() function). Example: say you have k1, k2, k3, k4, k5 - if you select * where keyword like k% you might get k1, k2, k3, k4, k5 one time and k5, k4, k3, k2, k1 the next (SQL doesn't guarantee return order and it can differ). That will throw off your paging.
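
The paging technique can be sketched with Python's sqlite3 (window functions need SQLite 3.25+; the table and keyword values are invented, and the page is narrowed to rows 4-6):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE KeywordTable (field TEXT, title TEXT)")
con.executemany("INSERT INTO KeywordTable VALUES (?, ?)",
                [("Keyword", f"exact-{i}") for i in range(3)] +
                [("Keyword#", f"partial-{i}") for i in range(5)])

# exact matches get QueryNum 1, partial matches 2; RowNum then pages the union
rows = con.execute("""
    WITH SearchResults AS
      (SELECT *, ROW_NUMBER() OVER (ORDER BY QueryNum, title) AS RowNum FROM
         (SELECT field, title, 1 AS QueryNum FROM KeywordTable WHERE field = 'Keyword'
          UNION ALL
          SELECT field, title, 2 AS QueryNum FROM KeywordTable WHERE field = 'Keyword#'
          ) SortedResults
      )
    SELECT title FROM SearchResults WHERE RowNum BETWEEN 4 AND 6
""").fetchall()
print([r[0] for r in rows])  # ['partial-0', 'partial-1', 'partial-2']
```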

qid & accept id: (2355791, 2355996) query: Help with generating a report from data in a parent-children model soup:

soup wrap:

SQL 2000 Based solution

DECLARE @Stack TABLE (
  StackID INTEGER IDENTITY
  , Category VARCHAR(20)
  , RootID INTEGER
  , ChildID INTEGER
  , Visited BIT)

INSERT INTO @Stack
SELECT  [Category] = c.category_name
        , [RootID] = c.category_id
        , [ChildID] = c.category_id
        , 0
FROM    Categories c

WHILE EXISTS (SELECT * FROM @Stack WHERE Visited = 0)
BEGIN
  DECLARE @StackID INTEGER
  SELECT  @StackID = MAX(StackID) FROM    @Stack

  INSERT INTO @Stack
  SELECT  st.Category
          , st.RootID
          , c.category_id
          , 0
  FROM    @Stack st
          INNER JOIN Categories c ON c.father_id = st.ChildID  
  WHERE   Visited = 0

  UPDATE  @Stack
  SET     Visited = 1
  WHERE   StackID <= @StackID
END

SELECT  st.RootID
        , st.Category
        , COUNT(s.sales_id)
FROM    @Stack st
        INNER JOIN Sales s ON s.category_id = st.ChildID
GROUP BY st.RootID, st.Category
ORDER BY st.RootID

SQL 2005 Based solution

A CTE should get you what you want

SQL Statement

;WITH QtyCTE AS (
  SELECT  [Category] = c.category_name
          , [RootID] = c.category_id
          , [ChildID] = c.category_id
  FROM    Categories c
  UNION ALL 
  SELECT  cte.Category
          , cte.RootID
          , c.category_id
  FROM    QtyCTE cte
          INNER JOIN Categories c ON c.father_id = cte.ChildID
)
SELECT  cte.RootID
        , cte.Category
        , COUNT(s.sales_id)
FROM    QtyCTE cte
        INNER JOIN Sales s ON s.category_id = cte.ChildID
GROUP BY cte.RootID, cte.Category
ORDER BY cte.RootID
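
The 2005-style recursive CTE translates almost verbatim to other engines. A sketch with Python's sqlite3 on an invented three-level category tree, one sale per category:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Categories (category_id INTEGER, category_name TEXT, father_id INTEGER);
CREATE TABLE Sales (sales_id INTEGER, category_id INTEGER);
INSERT INTO Categories VALUES (1, 'Books', NULL), (2, 'Fiction', 1), (3, 'Sci-Fi', 2);
INSERT INTO Sales VALUES (10, 1), (11, 2), (12, 3);
""")

# every category seeds its own subtree; recursion walks down to the leaves,
# so each root ends up paired with every ChildID beneath it
rows = con.execute("""
    WITH RECURSIVE QtyCTE AS (
      SELECT category_name AS Category, category_id AS RootID, category_id AS ChildID
      FROM Categories
      UNION ALL
      SELECT cte.Category, cte.RootID, c.category_id
      FROM QtyCTE cte
      JOIN Categories c ON c.father_id = cte.ChildID
    )
    SELECT cte.RootID, cte.Category, COUNT(s.sales_id)
    FROM QtyCTE cte JOIN Sales s ON s.category_id = cte.ChildID
    GROUP BY cte.RootID, cte.Category
    ORDER BY cte.RootID
""").fetchall()
print(rows)  # [(1, 'Books', 3), (2, 'Fiction', 2), (3, 'Sci-Fi', 1)]
```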
qid & accept id: (2386632, 2386741) query: Fetch unique combinations of two field values soup:

soup wrap:

For Ms Access you can try

SELECT  DISTINCT
        *
FROM Table1 tM
WHERE NOT EXISTS(SELECT 1 FROM Table1 t WHERE tM.Source = t.Dest AND tM.Dest = t.Source AND tm.Source > t.Source)

EDIT:

Example with table Data, which is the same...

SELECT  DISTINCT
        *
FROM Data  tM
WHERE NOT EXISTS(SELECT 1 FROM Data t WHERE tM.Source = t.Dest AND tM.Dest = t.Source AND tm.Source > t.Source)

or (Nice and Access Formatted...)

SELECT DISTINCT *
FROM Data AS tM
WHERE (((Exists (SELECT 1 FROM Data t WHERE tM.Source = t.Dest AND tM.Dest = t.Source AND tm.Source > t.Source))=False));
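
The deduplication can be sketched with Python's sqlite3 (sample pairs invented). The tM.Source > t.Source test drops the reversed duplicate of each symmetric pair while keeping pairs that occur in only one direction:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Data (Source TEXT, Dest TEXT)")
con.executemany("INSERT INTO Data VALUES (?, ?)",
                [("A", "B"), ("B", "A"), ("A", "C")])  # A-B appears in both directions

# ('B','A') is dropped because its mirror ('A','B') has the smaller Source
rows = con.execute("""
    SELECT DISTINCT *
    FROM Data tM
    WHERE NOT EXISTS (SELECT 1 FROM Data t
                      WHERE tM.Source = t.Dest AND tM.Dest = t.Source
                        AND tM.Source > t.Source)
    ORDER BY Source, Dest
""").fetchall()
print(rows)  # [('A', 'B'), ('A', 'C')]
```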
qid & accept id: (2401396, 2401595) query: Oracle SQL - Column with unix timestamp, need dd-mm-yyyy timestamp soup:

soup wrap:

Given this data ...

SQL> alter session set nls_date_format='dd-mon-yyyy hh24:mi:ss'
  2  /

Session altered.

SQL> select * from t23
  2  /

MY_TIMESTAMP
--------------------
08-mar-2010 13:06:02
08-mar-2010 13:06:08
13-mar-1985 13:06:26

SQL> 

.. it is simply a matter of converting the time elapsed since 01-JAN-1970 into seconds:

SQL> select my_timestamp
  2        , (my_timestamp - date '1970-01-01') * 86400 as unix_ts
  3  from t23
  4  /

MY_TIMESTAMP            UNIX_TS
-------------------- ----------
08-mar-2010 13:06:02 1268053562
08-mar-2010 13:06:08 1268053568
13-mar-1985 13:06:26  479567186

SQL>
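
The same arithmetic can be checked outside the database; a quick sketch in Python, using the dates from the output above:

```python
from datetime import datetime

# same arithmetic as the Oracle query: days since 01-JAN-1970 times 86400
def to_unix(ts: datetime) -> int:
    return int((ts - datetime(1970, 1, 1)).total_seconds())

print(to_unix(datetime(2010, 3, 8, 13, 6, 2)))    # 1268053562
print(to_unix(datetime(1985, 3, 13, 13, 6, 26)))  # 479567186
```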
qid & accept id: (2406693, 2406949) query: MDX Year on Year Sales by Months soup:

soup wrap:

SELECT {[Time].[2009], [Time].[2010]} ON 0, [Time].[Months].Members ON 1 FROM [Your Cube Name] WHERE [Measures].[Sales]

I based that on this query (below) that I've tested on the Adventure Works sample cube from Microsoft:

SELECT {[Ship Date].[Fiscal Year].&[2002], [Ship Date].[Fiscal Year].&[2003]} ON 0,
[Ship Date].[Month of Year].Members ON 1
FROM [Adventure Works] WHERE [Measures].[Sales Amount]

UPDATE:

Based on your query I'm not sure why it is working without specifying a hierarchy in your cube query (like [Time].[2010] instead of [Time].[Hierarchy Name].[2010]), but could you try this:

SELECT EXISTS([Time].Members, {[Time].[2009], [Time].[2010]}) ON COLUMNS, 
[Time].[Months].Members ON ROWS 
FROM [SalesProductIndicator] WHERE [Measures].[Sales] 

Thanks

qid & accept id: (2411210, 2411337) query: Finding a sql query to get the latest associated date for each grouping soup:
soup wrap:
select p.*
from (
    select EMPID, DateWorked, Max(EffectiveDate) as MaxEffectiveDate
    from Payroll
    where EffectiveDate <= DateWorked
    group by EMPID, DateWorked
) pm
inner join Payroll p on pm.EMPID = p.EMPID and pm.DateWorked = p.DateWorked and pm.MaxEffectiveDate = p.EffectiveDate

Output:

EMPID       DateWorked              Hours       WageRate                                EffectiveDate
----------- ----------------------- ----------- --------------------------------------- -----------------------
1           2010-01-01 00:00:00.000 10          7.25                                    2009-06-10 00:00:00.000
qid & accept id: (2461579, 2461744) query: How to join dynamic sql statement in variable with normal statement soup:

soup wrap:

Use temp tables & have the records dumped into it (from the dynamic query) & use the temp table to join with the static query that you have.

set @query = 'select
    HumanResources.Employee.EmployeeID
    ,HumanResources.Employee.LoginID
    ,HumanResources.Employee.Title
    ,HumanResources.EmployeeAddress.AddressID
into
    ##myTempTable
from
    HumanResources.Employee
    inner join HumanResources.EmployeeAddress
    on HumanResources.Employee.EmployeeID = HumanResources.EmployeeAddress.EmployeeID
;';

EXEC (@query);

Note that T-SQL creates the table with SELECT ... INTO rather than CREATE TABLE ... AS, and that a global temporary table (##) is used because a local #table created inside EXEC would be dropped as soon as the dynamic batch ends.

And then

select
    Employees.*
    ,Addresses.City
from
    ##myTempTable as Employees
    inner join
    (
        select
            Person.Address.AddressID
            ,Person.Address.City
        from
            Person.Address
    ) as Addresses
    on Employees.AddressID = Addresses.AddressID
qid & accept id: (2466091, 2466136) query: SQL to return dates that fall in period and range soup:

soup wrap:

For days use DATEDIFF and the modulo operation:

SELECT * FROM dates
WHERE `date` BETWEEN '1987-10-20' AND '1988-1-1'
AND DATEDIFF(`date`, '1987-10-20') % 10 = 0

For a period of 10 years, calculate the difference in the year modulo the period, and ensure that the month and day are the same:

SELECT * FROM dates
WHERE `date` BETWEEN '1980-10-20' AND '2000-10-20'
AND MONTH(date) = 10 AND DAY(date) = 20 AND (YEAR(date) - 1980) % 10 = 0

A period measured in months is not well-defined because months have different lengths. What is one month later than January 30th? You can get it working for some special cases such as 'first in the month'.
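
The day-period filter can be sketched with Python's sqlite3, where a julianday difference stands in for MySQL's DATEDIFF (sample dates invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dates (d TEXT)")
con.executemany("INSERT INTO dates VALUES (?)",
                [("1987-10-20",), ("1987-10-25",), ("1987-10-30",), ("1987-11-09",)])

# whole-day difference from the start date, modulo 10, keeps every 10th day
rows = con.execute("""
    SELECT d FROM dates
    WHERE d BETWEEN '1987-10-20' AND '1988-01-01'
      AND CAST(julianday(d) - julianday('1987-10-20') AS INTEGER) % 10 = 0
    ORDER BY d
""").fetchall()
print([r[0] for r in rows])  # ['1987-10-20', '1987-10-30', '1987-11-09']
```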

qid & accept id: (2473843, 2473860) query: MySQL: Select remaining rows soup:

soup wrap:

Use:

   SELECT t.name
     FROM TOOLS t
LEFT JOIN INSTALLS i ON i.tool_id = t.id
                    AND i.user_id = 99
    WHERE i.id IS NULL

Alternately, you can use NOT EXISTS:

SELECT t.name
  FROM TOOLS t
 WHERE NOT EXISTS(SELECT NULL 
                    FROM INSTALLS i
                   WHERE i.tool_id = t.id
                     AND i.user_id = 99)

...or NOT IN:

SELECT t.name
  FROM TOOLS t
 WHERE t.id NOT IN (SELECT i.tool_id
                      FROM INSTALLS i
                     WHERE i.user_id = 99)

Of the three options, the LEFT JOIN/IS NULL is the most efficient on MySQL. You can read more about it in this article.
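
All three forms return the same rows. A sketch with Python's sqlite3 comparing the LEFT JOIN/IS NULL and NOT EXISTS variants on invented data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE TOOLS (id INTEGER, name TEXT);
CREATE TABLE INSTALLS (id INTEGER, tool_id INTEGER, user_id INTEGER);
INSERT INTO TOOLS VALUES (1, 'hammer'), (2, 'wrench'), (3, 'saw');
INSERT INTO INSTALLS VALUES (1, 1, 99), (2, 3, 42);  -- user 99 installed only the hammer
""")

# anti-join: the LEFT JOIN finds no INSTALLS match, so i.id comes back NULL
left_join = con.execute("""
    SELECT t.name FROM TOOLS t
    LEFT JOIN INSTALLS i ON i.tool_id = t.id AND i.user_id = 99
    WHERE i.id IS NULL ORDER BY t.name
""").fetchall()

not_exists = con.execute("""
    SELECT t.name FROM TOOLS t
    WHERE NOT EXISTS (SELECT NULL FROM INSTALLS i
                      WHERE i.tool_id = t.id AND i.user_id = 99)
    ORDER BY t.name
""").fetchall()

print(left_join)               # [('saw',), ('wrench',)]
print(left_join == not_exists) # True
```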

qid & accept id: (2507933, 2536202) query: Formatting the output of an SQL query soup:

soup wrap:

I have found a way out of it: we can use concatenation here.

select name,id,location from employee;

gives us the columns separately, but not in CSV format.

I did

select name||','||id||','||location from employee;

We get the output in a CSV format. It has just concatenated the output with commas (,).
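
The same trick works in any engine that supports the standard || operator; a minimal check using Python's sqlite3 with an invented employee row:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (name TEXT, id INTEGER, location TEXT)")
con.execute("INSERT INTO employee VALUES ('alice', 1, 'york')")

# || concatenates the columns, with literal commas in between
row = con.execute(
    "SELECT name || ',' || id || ',' || location FROM employee").fetchone()
print(row[0])  # alice,1,york
```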

qid & accept id: (2524600, 2527255) query: How do I join three tables with SQLalchemy and keeping all of the columns in one of the tables? soup:

soup wrap:

Option-1:

Subscription is just a many-to-many relation object, and I would suggest that you model it as such rather than as a separate class. See the Configuring Many-to-Many Relationships documentation of SQLAlchemy/declarative.

Your model, together with the test code, becomes:

from sqlalchemy import create_engine, Column, Integer, DateTime, String, ForeignKey, Table
from sqlalchemy.orm import relation, scoped_session, sessionmaker, eagerload
from sqlalchemy.ext.declarative import declarative_base

engine = create_engine('sqlite:///:memory:', echo=True)
session = scoped_session(sessionmaker(bind=engine, autoflush=True))
Base = declarative_base()

t_subscription = Table('subscription', Base.metadata,
    Column('userId', Integer, ForeignKey('user.id')),
    Column('channelId', Integer, ForeignKey('channel.id')),
)

class Channel(Base):
    __tablename__ = 'channel'

    id = Column(Integer, primary_key = True)
    title = Column(String)
    description = Column(String)
    link = Column(String)
    pubDate = Column(DateTime)

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key = True)
    username = Column(String)
    password = Column(String)
    sessionId = Column(String)

    channels = relation("Channel", secondary=t_subscription)

# NOTE: no need for this class
# class Subscription(Base):
    # ...

Base.metadata.create_all(engine)


# ######################
# Add test data
c1 = Channel()
c1.title = 'channel-1'
c2 = Channel()
c2.title = 'channel-2'
c3 = Channel()
c3.title = 'channel-3'
c4 = Channel()
c4.title = 'channel-4'
session.add(c1)
session.add(c2)
session.add(c3)
session.add(c4)
u1 = User()
u1.username ='user1'
session.add(u1)
u1.channels.append(c1)
u1.channels.append(c3)
u2 = User()
u2.username ='user2'
session.add(u2)
u2.channels.append(c2)
session.commit()


# ######################
# clean the session and test the code
session.expunge_all()

# retrieve all (I assume those are not that many)
channels = session.query(Channel).all()

# get subscription info for the user
#q = session.query(User)
# use eagerload(...) so that all 'subscription' table data is loaded with the user itself, and not as a separate query
q = session.query(User).options(eagerload(User.channels))
for u in q.all():
    for c in channels:
        print (c.id, c.title, (c in u.channels))

which produces the following output:

(1, u'channel-1', True)
(2, u'channel-2', False)
(3, u'channel-3', True)
(4, u'channel-4', False)
(1, u'channel-1', False)
(2, u'channel-2', True)
(3, u'channel-3', False)
(4, u'channel-4', False)

Please note the use of eagerload, which will issue only 1 SELECT statement instead of 1 for each User when channels are asked for.

Option-2:

But if you want to keep your model and just create an SA query that gives you the columns you ask for, the following query should do the job:

from sqlalchemy import and_
from sqlalchemy.sql.expression import case
#...
q = (session.query(#User.username, 
                   Channel.id, Channel.title, 
                   case([(Subscription.channelId == None, False)], else_=True)
                  ).outerjoin((Subscription, 
                                and_(Subscription.userId==User.id, 
                                     Subscription.channelId==Channel.id))
                             )
    )
# optionally filter by user
q = q.filter(User.id == uid()) # assuming uid() is the function that provides the user's id
q = q.filter(User.sessionId == sid()) # assuming sid() is the function that provides the user's sessionId
res = q.all()
for r in res:
    print r

The output is exactly the same as in option-1 above.

qid & accept id: (2559110, 2559392) query: Is it possible to write a query which returns a date for every day between two specified days? soup:

soup wrap:

Here's an example from Postgres; I hope the dialects are comparable with regard to recursive queries:

WITH RECURSIVE t(n) AS (
    VALUES (1)
  UNION ALL
    SELECT n+1 FROM t WHERE n < 20
)
SELECT n FROM t;

...will return 20 records, the numbers from 1 to 20. Cast/convert these to dates and there you are.

UPDATE: Sorry, I don't have Oracle here, but according to this article

SELECT
   SYS_CONNECT_BY_PATH(DUMMY, '/')
FROM
   DUAL
CONNECT BY
   LEVEL<4;

gives

SYS_CONNECT_BY_PATH(DUMMY,'/')
--------------------------------
/X
/X/X
/X/X/X

It is also stated that this is supposed to be a very efficient way to generate rows. If ROWNUM can be used in the above select, and if a variable can be used in the LEVEL condition, then a solution can be worked out.

UPDATE2:

And indeed there are several options.

SELECT (CAST('01-JAN-2010' AS DATE) + (ROWNUM - 1)) n
FROM   ( SELECT 1 just_a_column
         FROM   dual
         CONNECT BY LEVEL <= 20
       )

orafaq states that: 'It should be noted that in later versions of oracle, at least as far back as 10gR1, operations against dual are optimized such that they require no logical or physical I/O operations. This makes them quite fast.', so I would say this is not completely esoteric.
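The pieces above can be put together and sanity-checked quickly. Here is a sketch in Python with SQLite, which also supports `WITH RECURSIVE` and can step directly in days rather than integers (the date range here is made up for illustration):

```python
import sqlite3

# Generate one row per day between two dates with a recursive CTE
# (SQLite dialect; the date range is invented for the example).
conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH RECURSIVE days(d) AS (
        SELECT DATE('2010-01-01')
        UNION ALL
        SELECT DATE(d, '+1 day') FROM days
        WHERE d < DATE('2010-01-20')
    )
    SELECT d FROM days
""").fetchall()
print(len(rows))                    # 20 rows, one per day
print(rows[0][0], rows[-1][0])      # 2010-01-01 2010-01-20
```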

qid & accept id: (2563918, 2564009) query: Create a Cumulative Sum Column in MySQL soup:

soup wrap:

If performance is an issue, you could use a MySQL variable:

set @csum := 0;
update YourTable
set cumulative_sum = (@csum := @csum + count)
order by id;

Alternatively, you could remove the cumulative_sum column and calculate it on each query:

set @csum := 0;
select id, count, (@csum := @csum + count) as cumulative_sum
from YourTable
order by id;

This calculates the running sum in a running way :)
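On engines with window functions (MySQL 8+, SQLite 3.25+), the same running sum needs no variables at all. A sketch against SQLite with made-up data (the column is named `cnt` here to avoid any clash with the COUNT keyword):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE YourTable (id INTEGER PRIMARY KEY, cnt INTEGER)")
conn.executemany("INSERT INTO YourTable VALUES (?, ?)",
                 [(1, 3), (2, 5), (3, 2)])
# SUM(...) OVER (ORDER BY id) sums all rows up to and including
# the current one, which is exactly the cumulative sum.
rows = conn.execute("""
    SELECT id, cnt, SUM(cnt) OVER (ORDER BY id) AS cumulative_sum
    FROM YourTable
""").fetchall()
print(rows)   # [(1, 3, 3), (2, 5, 8), (3, 2, 10)]
```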

qid & accept id: (2588304, 2588972) query: SQL query multi table selection soup:

soup wrap:

Lots of similar answers here. For some reason, though, all of them join the Section table, which is (likely) not necessary.

select
  p.*

from
  Product    p,
  Category   c

where
  p.category_id = c.id and
  c.section_id = 123
;

Explicit ANSI JOIN syntax per @nemiss's request:

select
  p.*

from Product    p

join Category   c
  on c.id = p.category_id
 and c.section_id = 123
;

Possible reason to include Section table: Selecting products based on Section name (instead of ID).

select
  p.*

from Product    p

join Category   c
  on c.id = p.category_id

join Section    s
  on s.id = c.section_id
 and s.name = 'Books'
;

If doing this, you'll want to make sure Section.name is indexed

alter table Section add index (name);
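A minimal end-to-end check of the two-table join, in Python with SQLite and invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Category (id INTEGER PRIMARY KEY, section_id INTEGER);
    CREATE TABLE Product  (id INTEGER PRIMARY KEY, category_id INTEGER,
                           name TEXT);
    INSERT INTO Category VALUES (1, 123), (2, 124);
    INSERT INTO Product  VALUES (10, 1, 'novel'), (11, 2, 'cd');
""")
# Only Product and Category are needed to filter by section_id;
# the Section table is never touched.
rows = conn.execute("""
    SELECT p.name
    FROM Product p
    JOIN Category c ON c.id = p.category_id AND c.section_id = 123
""").fetchall()
print(rows)   # [('novel',)]
```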
qid & accept id: (2640048, 2640090) query: SQL: how to get the left 3 numbers from an int soup:

soup wrap:

For SQL Server, the easiest way would definitely be:

SELECT CAST(LEFT(CAST(YourInt AS VARCHAR(100)), 3) AS INT)

Convert to string, take the leftmost three characters, and convert those back to an INT.

Doing it purely on the numerical value gets messy since you need to know how many digits you need to get rid of and so forth...

If you want to work purely with INTs, you'd have to construct something like this (at least you could do this in SQL Server - I'm not familiar enough with Access to know whether it works in the Access SQL "dialect"):

DECLARE @MyInt INT = 1234567

SELECT
    CASE 
        WHEN @MyInt < 1000 THEN @MyInt
        WHEN @MyInt >= 10000000 THEN @MyInt / 100000
        WHEN @MyInt >= 1000000 THEN @MyInt / 10000
        WHEN @MyInt >= 100000 THEN @MyInt / 1000
        WHEN @MyInt >= 10000 THEN @MyInt / 100
        WHEN @MyInt >= 1000 THEN @MyInt / 10
    END AS 'NewInt'

But that's always an approximation - if you have a really large number, it might just fall through the cracks (the CASE above only handles up to eight digits).
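For comparison, here are both routes in Python - the string route the answer recommends, and a purely numeric route that replaces the CASE ladder with a digit count (function names are made up for the sketch):

```python
import math

def left3_string(n: int) -> int:
    # Mirrors CAST(LEFT(CAST(n AS VARCHAR), 3) AS INT).
    return int(str(n)[:3])

def left3_numeric(n: int) -> int:
    # Count the digits, then integer-divide away all but the first three.
    digits = int(math.log10(n)) + 1
    return n // 10 ** max(digits - 3, 0)

print(left3_string(1234567))   # 123
print(left3_numeric(1234567))  # 123
print(left3_numeric(999))      # 999 (numbers under 1000 pass through)
```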

qid & accept id: (2651249, 2652259) query: wanted to get all dates in mysql result soup:

soup wrap:

There is an approach that can do this in pure SQL but it has limitations.

First you need to have a number sequence 1,2,3...n as rows (assume select row from rows return that).

Then you can left join on this and convert to dates based on number of days between min and max.

 select @min_join_on := (select min(join_on) from user);
 select @no_rows := (select datediff(max(join_on), @min_join_on) from user)+1;

will give you the required number of rows, which then you can use to

 select adddate(@min_join_on, interval row - 1 day) from rows where row <= @no_rows;

will return the required sequence of dates, on which you can then do a left join back to the user table.
Using variables can be avoided if you use subqueries; I broke it down for readability.

Now, the problem is that the number of rows in the rows table has to be at least as big as @no_rows. With 10,000 rows you can cover date ranges of up to 27 years; with 100,000 rows, up to 273 years (this feels really bad, but I am afraid that if you don't want to use stored procedures it will have to look and feel awkward).

So, if you can work with such fixed date ranges you can even substitute the table with the query, such as this

SELECT @row := @row + 1 as row
FROM (select 0 union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) t,
     (select 0 union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) t2,
     (select 0 union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) t3,
     (select 0 union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) t4,
     (SELECT @row:=0) r

which will produce 10,000 rows going from 1 to 10,000 and it will not be terribly inefficient at it.

So at the end it is doable in a single query.

create table user(id INT NOT NULL AUTO_INCREMENT, name varchar(100), join_on date, PRIMARY KEY(id));

insert into user values (null, 'user1', '2010-04-02'), (null, 'user2', '2010-04-04'), (null, 'user3', '2010-04-08'), (null, 'user4', '2010-04-08');

mysql> select * from user;
+----+-------+------------+
| id | name  | join_on    |
+----+-------+------------+
|  1 | user1 | 2010-04-02 | 
|  2 | user2 | 2010-04-04 | 
|  3 | user3 | 2010-04-08 | 
|  4 | user4 | 2010-04-08 | 
+----+-------+------------+
4 rows in set (0.00 sec)


SELECT date, count(id)
FROM (
SELECT adddate((select min(join_on) from user), row-1) as date 
FROM ( 
SELECT @row := @row + 1 as row
FROM (select 0 union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) t,
     (select 0 union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) t2,
     (SELECT @row:=0) r ) n
WHERE n.row <= ( select datediff(max(join_on), min(join_on)) from user) + 1
) dr LEFT JOIN user u ON dr.date = u.join_on
GROUP BY dr.date

+------------+-----------+
| date       | count(id) |
+------------+-----------+
| 2010-04-02 |         1 | 
| 2010-04-03 |         0 | 
| 2010-04-04 |         1 | 
| 2010-04-05 |         0 | 
| 2010-04-06 |         0 | 
| 2010-04-07 |         0 | 
| 2010-04-08 |         2 | 
+------------+-----------+
7 rows in set (0.00 sec)
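On engines with recursive CTEs (MySQL 8+, SQLite, Postgres) the whole recipe collapses to a few lines and the digit-table trick is unnecessary. A sketch in Python with SQLite, reusing the answer's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE user (id INTEGER PRIMARY KEY AUTOINCREMENT,
                       name TEXT, join_on DATE);
    INSERT INTO user (name, join_on) VALUES
        ('user1', '2010-04-02'), ('user2', '2010-04-04'),
        ('user3', '2010-04-08'), ('user4', '2010-04-08');
""")
# The recursive CTE generates every date from MIN(join_on) to
# MAX(join_on); the LEFT JOIN then fills the gaps with zero counts.
rows = conn.execute("""
    WITH RECURSIVE dr(d) AS (
        SELECT (SELECT MIN(join_on) FROM user)
        UNION ALL
        SELECT DATE(d, '+1 day') FROM dr
        WHERE d < (SELECT MAX(join_on) FROM user)
    )
    SELECT dr.d, COUNT(u.id)
    FROM dr LEFT JOIN user u ON dr.d = u.join_on
    GROUP BY dr.d
    ORDER BY dr.d
""").fetchall()
for r in rows:
    print(r)
```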
qid & accept id: (2695116, 2697554) query: Update multiple table column values using single query soup:
soup wrap:
/** XXX CODING HORROR... */

Depending on your needs, you could use an updatable view: create a view over your base tables, add an "instead of" trigger to it, and update the view directly.

Some example tables:

create table party (
    party_id integer,
    employee_id integer
    );

create table party_name (
    party_id integer,
    first_name varchar2(120 char),
    last_name varchar2(120 char)
    );

insert into party values (1,1000);   
insert into party values (2,2000);
insert into party values (3,3000);

insert into party_name values (1,'Kipper','Family');
insert into party_name values (2,'Biff','Family');
insert into party_name values (3,'Chip','Family');

commit;

select * from party_v;

PARTY_ID    EMPLOYEE_ID    FIRST_NAME    LAST_NAME
1            1000           Kipper        Family
2            2000           Biff          Family
3            3000           Chip          Family

... then create the updatable view

create or replace view party_v
as
select
    p.party_id,
    p.employee_id,
    n.first_name,
    n.last_name
from
    party p left join party_name n on p.party_id = n.party_id;

create or replace trigger trg_party_update
instead of update on party_v 
for each row
declare
begin
--
    update party
    set
        party_id = :new.party_id,
        employee_id = :new.employee_id
    where
        party_id = :old.party_id;
--
    update party_name
    set
        party_id = :new.party_id,
        first_name = :new.first_name,
        last_name = :new.last_name
    where
        party_id = :old.party_id;
--
end;
/

You can now update the view directly...

update party_v
set
    employee_id = 42,
    last_name = 'Oxford'
where
    party_id = 1;

select * from party_v;

PARTY_ID    EMPLOYEE_ID    FIRST_NAME    LAST_NAME
1            42             Kipper        Oxford
2            2000           Biff          Family
3            3000           Chip          Family
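The same pattern works in other engines that support INSTEAD OF triggers on views. Here is a trimmed-down sketch in Python with SQLite (only two of the columns are routed through, to keep it short; data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE party      (party_id INTEGER, employee_id INTEGER);
    CREATE TABLE party_name (party_id INTEGER, first_name TEXT,
                             last_name TEXT);
    INSERT INTO party      VALUES (1, 1000);
    INSERT INTO party_name VALUES (1, 'Kipper', 'Family');

    CREATE VIEW party_v AS
        SELECT p.party_id, p.employee_id, n.first_name, n.last_name
        FROM party p LEFT JOIN party_name n ON p.party_id = n.party_id;

    -- Views are read-only in SQLite unless an INSTEAD OF trigger
    -- redirects the write to the base tables.
    CREATE TRIGGER trg_party_update INSTEAD OF UPDATE ON party_v
    FOR EACH ROW
    BEGIN
        UPDATE party      SET employee_id = NEW.employee_id
                          WHERE party_id = OLD.party_id;
        UPDATE party_name SET last_name = NEW.last_name
                          WHERE party_id = OLD.party_id;
    END;

    UPDATE party_v SET employee_id = 42, last_name = 'Oxford'
    WHERE party_id = 1;
""")
rows = conn.execute("SELECT * FROM party_v").fetchall()
print(rows)   # [(1, 42, 'Kipper', 'Oxford')]
```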
qid & accept id: (2746331, 2746350) query: How to retrieve the rows (with maximum value in a field) having a another common field? soup:

soup wrap:

This:

WITH    q AS
        (
        SELECT  *, ROW_NUMBER() OVER (PARTITION BY field2 ORDER BY field3 DESC) AS rn
        FROM    table1
        )
SELECT  *
FROM    q
WHERE   rn = 1

or this:

SELECT  q.*
FROM    (
        SELECT  DISTINCT field2
        FROM    table1
        ) qo
CROSS APPLY
        (
        SELECT  TOP 1 *
        FROM    table1 t
        WHERE   t.field2 = qo.field2
        ORDER BY
                t.field3 DESC
        ) q

Depending on the field2 cardinality, the first or the second query can be more efficient.
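The ROW_NUMBER shape can be demonstrated in a quick, hypothetical SQLite session (SQLite has no CROSS APPLY, so only the first form is shown; table contents are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (field1 TEXT, field2 TEXT, field3 INTEGER);
    INSERT INTO table1 VALUES
        ('a', 'x', 1), ('b', 'x', 5), ('c', 'y', 2), ('d', 'y', 7);
""")
# One row per field2 group: the row with the highest field3 gets rn = 1.
rows = conn.execute("""
    WITH q AS (
        SELECT *, ROW_NUMBER() OVER (PARTITION BY field2
                                     ORDER BY field3 DESC) AS rn
        FROM table1
    )
    SELECT field1, field2, field3 FROM q
    WHERE rn = 1
    ORDER BY field2
""").fetchall()
print(rows)   # [('b', 'x', 5), ('d', 'y', 7)]
```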

See this article for more details:

qid & accept id: (2769007, 2769023) query: formula for computed column based on different table's column soup:

soup wrap:

You could create a user-defined function for this:

CREATE FUNCTION dbo.GetValue(@ncode INT, @recid INT)
RETURNS INT
AS 
BEGIN
   RETURN (SELECT @recid * nvalue 
           FROM c_const 
           WHERE code = @ncode)
END

and then use that to define your computed column:

ALTER TABLE dbo.YourTable
   ADD NewColumnName AS dbo.GetValue(ncodeValue, recIdValue)
qid & accept id: (2781315, 2781396) query: SQL Statement to update the date soup:

soup wrap:

Dates are not strings, but either of the following will result in a date:

UPDATE [Table] SET `Birthdate` = CDate('1993-08-02 00:00:00.0') WHERE `ID` = 000

(see the documentation for CDate)

UPDATE [Table] SET `Birthdate` = #08/02/1993# WHERE `ID` = 000
qid & accept id: (2781419, 2781452) query: Optimal way to convert to date soup:

soup wrap:

try this:

CONVERT(DATETIME, CONVERT(NVARCHAR, YYYYMMDD))

For example:

SELECT CONVERT(DATETIME, CONVERT(NVARCHAR, 20100401))

Results in:

2010-04-01 00:00:00.000
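The same convert-via-string idea, sketched in Python for a quick sanity check (the helper name is made up):

```python
from datetime import datetime

def yyyymmdd_to_date(n: int):
    # Same route as the T-SQL: int -> 'YYYYMMDD' string -> date.
    return datetime.strptime(str(n), "%Y%m%d").date()

print(yyyymmdd_to_date(20100401))   # 2010-04-01
```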
qid & accept id: (2788575, 2788639) query: tsql script to add delete cascade to existing tables soup:
soup wrap:
ALTER TABLE [wm].[TABLE_NAME]  WITH NOCHECK ADD  CONSTRAINT [FK_TABLE_NAME_PARENT_TABLE_NAME] FOREIGN KEY([FOREIGN_KEY])
REFERENCES [wm].[PARENT_TABLE_NAME] ([PRIVATE_KEY])
ON DELETE CASCADE
GO

ALTER TABLE [wm].[Thumbs]  WITH NOCHECK ADD  CONSTRAINT [FK_Thumbs_Documents] FOREIGN KEY([DocID])
REFERENCES [wm].[Documents] ([ID])
ON DELETE CASCADE
GO
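The cascade behaviour itself can be verified quickly outside SQL Server. A sketch in Python with SQLite (which enforces foreign keys only when the pragma is on; table names are borrowed from the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs per connection
conn.executescript("""
    CREATE TABLE Documents (ID INTEGER PRIMARY KEY);
    CREATE TABLE Thumbs (ID INTEGER PRIMARY KEY,
                         DocID INTEGER REFERENCES Documents(ID)
                               ON DELETE CASCADE);
    INSERT INTO Documents VALUES (1);
    INSERT INTO Thumbs VALUES (10, 1);
""")
conn.execute("DELETE FROM Documents WHERE ID = 1")
remaining = conn.execute("SELECT COUNT(*) FROM Thumbs").fetchone()[0]
print(remaining)   # 0 -- the child row was deleted with its parent
```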
qid & accept id: (2792388, 2792436) query: SQL Reset Identity ID in already populated table soup:

soup wrap:

The easiest way would be to make a copy of the current table, fix up any parentid issues, drop it and then rename the new one.

You could also temporarily remove the IDENTITY and try the following:

;WITH TBL AS
(
  SELECT *, ROW_NUMBER() OVER (ORDER BY ID) AS RN
  FROM CURRENT_TABLE
)
UPDATE TBL
SET ID = RN

Or, if you don't care about the order of the records, this

DECLARE @id INT;
SET @id = 0;

UPDATE CURRENT_TABLE
SET @id = ID = @id + 1;
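The ROW_NUMBER() idea is easy to sanity-check elsewhere. A sketch in Python with SQLite, where a correlated COUNT(*) plays the role of ROW_NUMBER() (table and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE current_table (id INTEGER, val TEXT);
    INSERT INTO current_table VALUES (3, 'a'), (7, 'b'), (20, 'c');
""")
# Renumber the ids 1..n in their existing order: each row's new id is
# the number of rows whose old id is at or below its own.
conn.execute("""
    UPDATE current_table
    SET id = (SELECT COUNT(*) FROM current_table t
              WHERE t.id <= current_table.id)
""")
rows = conn.execute("SELECT * FROM current_table ORDER BY id").fetchall()
print(rows)   # [(1, 'a'), (2, 'b'), (3, 'c')]
```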
qid & accept id: (2884295, 2884315) query: Help with constructing a conditional SQL statement soup:

soup wrap:

Naively:

SELECT *
FROM Entries
WHERE Language = 'Swedish' 

UNION ALL

SELECT *
FROM Entries
WHERE Language = 'English' 
    AND NOT EXISTS (
        SELECT *
        FROM Entries
        WHERE Language = 'Swedish' 
    )

or:

SELECT *
FROM Entries
WHERE Language = 'Swedish' 
    OR (Language = 'English' 
        AND NOT EXISTS (
            SELECT *
            FROM Entries
            WHERE Language = 'Swedish' 
        )
    )
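A quick check of the fallback behaviour in Python with SQLite, using made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Entries (Language TEXT, Content TEXT);
    INSERT INTO Entries VALUES ('English', 'hello');
""")
fallback = """
    SELECT Content FROM Entries WHERE Language = 'Swedish'
    UNION ALL
    SELECT Content FROM Entries WHERE Language = 'English'
      AND NOT EXISTS (SELECT * FROM Entries WHERE Language = 'Swedish')
"""
r1 = conn.execute(fallback).fetchall()
print(r1)   # [('hello',)] -- no Swedish rows, so English is used
conn.execute("INSERT INTO Entries VALUES ('Swedish', 'hej')")
r2 = conn.execute(fallback).fetchall()
print(r2)   # [('hej',)]   -- Swedish now wins, English is suppressed
```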
qid & accept id: (2900217, 2900250) query: Getting age in years in a SQL query soup:

soup wrap:

Assuming birthday is stored as a DateTime

Select Count(*)
From    (
        Select Id, Floor(DateDiff(d, BirthDate, GetDate()) / 365.25) As Age
        From People
        ) As EmpAges
Where Age Between 20 And 40

This could also be written without the derived table like so:

Select Count(*)
From People
Where Floor(DateDiff(d, BirthDate, GetDate()) / 365.25)  Between 20 And 40

Yet another way would be to use DateAdd. As OMG Ponies and ck mentioned, this one would be the most efficient of the bunch as it would enable the use of an index on dateOfBirth if it existed.

Select Count(*)
From People
Where DateOfBirth Between DateAdd(yy, -40, GetDate()) And DateAdd(yy, -20, GetDate())
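For reference, the exact calendar-age rule - which FLOOR(DATEDIFF(d, ...)/365.25) only approximates and can get wrong by a day around birthdays - looks like this in Python:

```python
from datetime import date

def age_in_years(born: date, today: date) -> int:
    # Exact calendar age: subtract the birth year, minus one if the
    # birthday hasn't occurred yet this year.
    before_birthday = (today.month, today.day) < (born.month, born.day)
    return today.year - born.year - before_birthday

today = date(2010, 6, 1)
print(age_in_years(date(1980, 6, 2), today))    # 29
print(age_in_years(date(1980, 5, 31), today))   # 30
```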
qid & accept id: (2913338, 2913370) query: In mySQL, Is it possible to SELECT from two tables and merge the columns? soup:

soup wrap:

You can combine columns from both tables using (id,name) as the joining criteria with:

select
    a.id                               as id,
    a.name                             as name,
    a.somefield1 || ' ' || b.somefield1 as somefield1
from tablea a, tableb b
where a.id   = b.id
  and a.name = b.name
  and b.name = 'mooseburgers';

If you want to join on just the (id) and combine the name and somefield1 columns:

select
    a.id                               as id,
    a.name || ' ' || b.name            as name,
    a.somefield1 || ' ' || b.somefield1 as somefield1
from tablea a, tableb b
where a.id   = b.id
  and b.name = 'mooseburgers';

Although I have to admit this is a rather unusual way of doing things. I assume you have your reasons however :-)

If I've misunderstood your question and you just want a more conventional union of the two tables, use something like:

select id, name, somefield1, '' as somefield2 from tablea where name = 'mooseburgers'
union all
select id, name, somefield1, somefield2 from tableb where name = 'mooseburgers'

This won't combine rows but will instead just append the rows from the two queries. Use union on its own if you want to remove duplicate rows but, if you're certain there are no duplicates or you don't want them removed, union all is often more efficient.


Based on your edit, the actual query would be:

select name, somefield1 from tablea where name = 'zoot'
union all
select name, somefield1 from tableb where name = 'zoot'

(or union if you don't want duplicates where a.name==b.name=='zoot' and a.somefield1==b.somefield1).
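To see that difference concretely, here is a small runnable sketch using SQLite from Python (table and column names follow the examples above; the one-row sample data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablea (id INTEGER, name TEXT, somefield1 TEXT)")
conn.execute("CREATE TABLE tableb (id INTEGER, name TEXT, somefield1 TEXT)")
conn.execute("INSERT INTO tablea VALUES (1, 'zoot', 'x')")
conn.execute("INSERT INTO tableb VALUES (1, 'zoot', 'x')")  # duplicate of tablea's row

# UNION ALL appends everything; UNION removes duplicate rows
union_all = conn.execute(
    "SELECT name, somefield1 FROM tablea WHERE name = 'zoot' "
    "UNION ALL "
    "SELECT name, somefield1 FROM tableb WHERE name = 'zoot'").fetchall()
union = conn.execute(
    "SELECT name, somefield1 FROM tablea WHERE name = 'zoot' "
    "UNION "
    "SELECT name, somefield1 FROM tableb WHERE name = 'zoot'").fetchall()
print(len(union_all), len(union))  # 2 1
```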

qid & accept id: (2919168, 2920858) query: Invoking a function call in a string in an Oracle Procedure soup:

It's easy enough to dynamically execute a string ...

\n
create or replace function fmt_fname (p_dyn_string in varchar2)\n    return varchar2\nis\n    return_value varchar2(128);\nbegin\n    execute immediate 'select '||p_dyn_string||' from dual'\n        into return_value;\n    return  return_value;\nend fmt_fname;\n/\n
\n

The problem arises where your string contains literals, with the dreaded quotes ...

\n
SQL> select fmt_fname('TEST||to_char(sysdate, 'DDD')') from dual\n  2  /\nselect fmt_fname('TEST||to_char(sysdate, 'DDD')') from dual\n                                          *\nERROR at line 1:\nORA-00907: missing right parenthesis\n\n\nSQL>\n
\n

So we have to escape the apostrophes, all of them, including the ones you haven't included in your posted string:

\n
SQL> select * from t34\n  2  /\n\n        ID FILENAME\n---------- ------------------------------\n         1 APC001\n         2 XYZ213\n         3 TEST147\n\n\nSQL> select * from t34\n  2  where filename = fmt_fname('''TEST''||to_char(sysdate, ''DDD'')')\n  3  /\n\n        ID FILENAME\n---------- ------------------------------\n         3 TEST147\n\nSQL>\n
\n

EDIT

\n

Just for the sake of fairness I feel I should point out that Tony's solution works just as well:

\n
SQL> create or replace function fmt_fname (p_dyn_string in varchar2)\n  2      return varchar2\n  3  is\n  4      return_value varchar2(128);\n  5  begin\n  6      execute immediate 'begin :result := ' || p_dyn_string || '; end;'\n  7          using out return_value;\n  8      return  return_value;\n  9  end;\n 10  /\n\nFunction created.\n\nSQL> select fmt_fname('''TEST''||to_char(sysdate, ''DDD'')') from dual\n  2  /\n\nFMT_FNAME('''TEST''||TO_CHAR(SYSDATE,''DDD'')')\n--------------------------------------------------------------------------------\nTEST147\n\nSQL>\n
\n

In fact, by avoiding the SELECT on DUAL it is probably better.

\n soup wrap:

It's easy enough to dynamically execute a string ...

create or replace function fmt_fname (p_dyn_string in varchar2)
    return varchar2
is
    return_value varchar2(128);
begin
    execute immediate 'select '||p_dyn_string||' from dual'
        into return_value;
    return  return_value;
end fmt_fname;
/

The problem arises where your string contains literals, with the dreaded quotes ...

SQL> select fmt_fname('TEST||to_char(sysdate, 'DDD')') from dual
  2  /
select fmt_fname('TEST||to_char(sysdate, 'DDD')') from dual
                                          *
ERROR at line 1:
ORA-00907: missing right parenthesis


SQL>

So we have to escape the apostrophes, all of them, including the ones you haven't included in your posted string:

SQL> select * from t34
  2  /

        ID FILENAME
---------- ------------------------------
         1 APC001
         2 XYZ213
         3 TEST147


SQL> select * from t34
  2  where filename = fmt_fname('''TEST''||to_char(sysdate, ''DDD'')')
  3  /

        ID FILENAME
---------- ------------------------------
         3 TEST147

SQL>
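As a side note (not part of the original answer): on Oracle 10g and later, the alternative quoting mechanism avoids the doubled apostrophes altogether:

```sql
-- q'[...]' treats everything between the brackets literally,
-- so the embedded single quotes need no escaping
select fmt_fname(q'['TEST'||to_char(sysdate, 'DDD')]') from dual;
```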

EDIT

Just for the sake of fairness I feel I should point out that Tony's solution works just as well:

SQL> create or replace function fmt_fname (p_dyn_string in varchar2)
  2      return varchar2
  3  is
  4      return_value varchar2(128);
  5  begin
  6      execute immediate 'begin :result := ' || p_dyn_string || '; end;'
  7          using out return_value;
  8      return  return_value;
  9  end;
 10  /

Function created.

SQL> select fmt_fname('''TEST''||to_char(sysdate, ''DDD'')') from dual
  2  /

FMT_FNAME('''TEST''||TO_CHAR(SYSDATE,''DDD'')')
--------------------------------------------------------------------------------
TEST147

SQL>

In fact, by avoiding the SELECT on DUAL it is probably better.

qid & accept id: (2922856, 2922972) query: mysql: how to change column to be PK Auto_Increment soup:

Here we create a little table:

\n
mysql> CREATE TABLE test2 (id int);\n
\n

Note Null is YES, and id is not a primary key, nor does it auto_increment.

\n
mysql> DESCRIBE test2;\n+-------+---------+------+-----+---------+-------+\n| Field | Type    | Null | Key | Default | Extra |\n+-------+---------+------+-----+---------+-------+\n| id    | int(11) | YES  |     | NULL    |       | \n+-------+---------+------+-----+---------+-------+\n1 row in set (0.00 sec)\n
\n

Here is the alter command:

\n
mysql> ALTER TABLE test2 MODIFY COLUMN id INT NOT NULL auto_increment, ADD primary key (id);\n
\n

Now Null is NO, and id is a primary key with auto_increment.

\n
mysql> describe test2;\ndescribe test2;\n+-------+---------+------+-----+---------+----------------+\n| Field | Type    | Null | Key | Default | Extra          |\n+-------+---------+------+-----+---------+----------------+\n| id    | int(11) | NO   | PRI | NULL    | auto_increment | \n+-------+---------+------+-----+---------+----------------+\n1 row in set (0.00 sec)\n
\n

Primary keys are always unique.

\n soup wrap:

Here we create a little table:

mysql> CREATE TABLE test2 (id int);

Note Null is YES, and id is not a primary key, nor does it auto_increment.

mysql> DESCRIBE test2;
+-------+---------+------+-----+---------+-------+
| Field | Type    | Null | Key | Default | Extra |
+-------+---------+------+-----+---------+-------+
| id    | int(11) | YES  |     | NULL    |       | 
+-------+---------+------+-----+---------+-------+
1 row in set (0.00 sec)

Here is the alter command:

mysql> ALTER TABLE test2 MODIFY COLUMN id INT NOT NULL auto_increment, ADD primary key (id);

Now Null is NO, and id is a primary key with auto_increment.

mysql> describe test2;
+-------+---------+------+-----+---------+----------------+
| Field | Type    | Null | Key | Default | Extra          |
+-------+---------+------+-----+---------+----------------+
| id    | int(11) | NO   | PRI | NULL    | auto_increment | 
+-------+---------+------+-----+---------+----------------+
1 row in set (0.00 sec)

Primary keys are always unique.

qid & accept id: (2930768, 2930818) query: How to compare sqlite TIMESTAMP values soup:

The issue is with the way you've inserted data into your table: the +0200 syntax doesn't match any of SQLite's time formats:

\n
    \n
  1. YYYY-MM-DD
  2. \n
  3. YYYY-MM-DD HH:MM
  4. \n
  5. YYYY-MM-DD HH:MM:SS
  6. \n
  7. YYYY-MM-DD HH:MM:SS.SSS
  8. \n
  9. YYYY-MM-DDTHH:MM
  10. \n
  11. YYYY-MM-DDTHH:MM:SS
  12. \n
  13. YYYY-MM-DDTHH:MM:SS.SSS
  14. \n
  15. HH:MM
  16. \n
  17. HH:MM:SS
  18. \n
  19. HH:MM:SS.SSS
  20. \n
  21. now
  22. \n
  23. DDDDDDDDDD
  24. \n
\n

Changing it to use the SS.SSS format works correctly:

\n
sqlite> CREATE TABLE Foo (created_at TIMESTAMP);\nsqlite> INSERT INTO Foo VALUES('2010-05-28T15:36:56+0200');\nsqlite> SELECT * FROM Foo WHERE foo.created_at < '2010-05-28 16:20:55';\nsqlite> SELECT * FROM Foo WHERE DATETIME(foo.created_at) < '2010-05-28 16:20:55';\nsqlite> INSERT INTO Foo VALUES('2010-05-28T15:36:56.200');\nsqlite> SELECT * FROM Foo WHERE DATETIME(foo.created_at) < '2010-05-28 16:20:55';\n2010-05-28T15:36:56.200\n
\n

If you absolutely can't change the format when it is inserted, you might have to fall back to doing something "clever" and modifying the actual string (i.e. to replace the + with a ., etc.).

\n
\n

(original answer)

\n

You haven't described what kind of data is contained in your CREATED_AT column. If it is indeed a datetime, it will compare correctly against a string:

\n
sqlite> SELECT DATETIME('now');\n2010-05-28 16:33:10\nsqlite> SELECT DATETIME('now') < '2011-01-01 00:00:00';\n1\n
\n

If it is stored as a unix timestamp, you need to call DATETIME function with the second argument as 'unixepoch' to compare against a string:

\n
sqlite> SELECT DATETIME(0, 'unixepoch');\n1970-01-01 00:00:00\nsqlite> SELECT DATETIME(0, 'unixepoch') < '2010-01-01 00:00:00';\n1\nsqlite> SELECT DATETIME(0, 'unixepoch') == DATETIME('1970-01-01 00:00:00');\n1\n
\n

If neither of those solve your problem (and even if they do!) you should always post some data so that other people can reproduce your problem. You should even feel free to come up with a subset of your original data that still reproduces the problem.

\n soup wrap:

The issue is with the way you've inserted data into your table: the +0200 syntax doesn't match any of SQLite's time formats:

  1. YYYY-MM-DD
  2. YYYY-MM-DD HH:MM
  3. YYYY-MM-DD HH:MM:SS
  4. YYYY-MM-DD HH:MM:SS.SSS
  5. YYYY-MM-DDTHH:MM
  6. YYYY-MM-DDTHH:MM:SS
  7. YYYY-MM-DDTHH:MM:SS.SSS
  8. HH:MM
  9. HH:MM:SS
  10. HH:MM:SS.SSS
  11. now
  12. DDDDDDDDDD

Changing it to use the SS.SSS format works correctly:

sqlite> CREATE TABLE Foo (created_at TIMESTAMP);
sqlite> INSERT INTO Foo VALUES('2010-05-28T15:36:56+0200');
sqlite> SELECT * FROM Foo WHERE foo.created_at < '2010-05-28 16:20:55';
sqlite> SELECT * FROM Foo WHERE DATETIME(foo.created_at) < '2010-05-28 16:20:55';
sqlite> INSERT INTO Foo VALUES('2010-05-28T15:36:56.200');
sqlite> SELECT * FROM Foo WHERE DATETIME(foo.created_at) < '2010-05-28 16:20:55';
2010-05-28T15:36:56.200

If you absolutely can't change the format when it is inserted, you might have to fall back to doing something "clever" and modifying the actual string (i.e. to replace the + with a ., etc.).
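One way to do that "clever" string fix is to rewrite the bare +0200 suffix into the +HH:MM form that SQLite's date functions do recognize. The normalize helper below is hypothetical, and assumes the offset is always a trailing ±HHMM:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Foo (created_at TIMESTAMP)")
conn.execute("INSERT INTO Foo VALUES ('2010-05-28T15:36:56+0200')")

# DATETIME() cannot parse the bare +0200 suffix and returns NULL
bad = conn.execute("SELECT DATETIME(created_at) FROM Foo").fetchone()[0]

def normalize(ts):
    # hypothetical helper: turn a trailing +HHMM/-HHMM into +HH:MM,
    # a timezone form SQLite understands (and converts to UTC)
    if len(ts) >= 5 and ts[-5] in "+-" and ts[-4:].isdigit():
        return ts[:-2] + ":" + ts[-2:]
    return ts

conn.execute("INSERT INTO Foo VALUES (?)", (normalize("2010-05-28T15:36:56+0200"),))
good = conn.execute(
    "SELECT DATETIME(created_at) FROM Foo WHERE DATETIME(created_at) IS NOT NULL"
).fetchone()[0]
print(bad, good)
```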


(original answer)

You haven't described what kind of data is contained in your CREATED_AT column. If it is indeed a datetime, it will compare correctly against a string:

sqlite> SELECT DATETIME('now');
2010-05-28 16:33:10
sqlite> SELECT DATETIME('now') < '2011-01-01 00:00:00';
1

If it is stored as a unix timestamp, you need to call DATETIME function with the second argument as 'unixepoch' to compare against a string:

sqlite> SELECT DATETIME(0, 'unixepoch');
1970-01-01 00:00:00
sqlite> SELECT DATETIME(0, 'unixepoch') < '2010-01-01 00:00:00';
1
sqlite> SELECT DATETIME(0, 'unixepoch') == DATETIME('1970-01-01 00:00:00');
1

If neither of those solve your problem (and even if they do!) you should always post some data so that other people can reproduce your problem. You should even feel free to come up with a subset of your original data that still reproduces the problem.

qid & accept id: (2945765, 2946013) query: Determining SQL MERGE statement result soup:

What you could do is create a temporary table (or a table variable) and send your output there - add some meaningful fields to your OUTPUT clause to make it clear what row was \naffected by what action:

\n
DECLARE @OutputTable TABLE (Guid UNIQUEIDENTIFIER, Action VARCHAR(100))\n\nMERGE INTO TestTable as target\nUSING ( select '00D81CB4EA0842EF9E158BB8FEC48A1E' )\nAS source (Guid)\nON ( target.Guid = source.Guid ) \nWHEN MATCHED THEN\nUPDATE SET Test_Column = NULL\nWHEN NOT MATCHED THEN\nINSERT (Guid, Test_Column) VALUES ('00D81CB4EA0842EF9E158BB8FEC48A1E', NULL)\nOUTPUT INSERTED.Guid, $action INTO @OutputTable\n\nSELECT\n   Guid, Action\nFROM\n   @OutputTable\n
\n

UPDATE: ah, okay, so you want to call this from .NET ! Well, in that case, just call it using the .ExecuteReader() method on your SqlCommand object - the stuff you're outputting using OUTPUT... will be returned to the .NET caller as a result set - you can loop through that:

\n
using(SqlCommand cmd = new SqlCommand(mergeStmt, connection))\n{\n   connection.Open();\n\n   using(SqlDataReader rdr = cmd.ExecuteReader())\n   {\n      while(rdr.Read())\n      {\n         var outputAction = rdr.GetValue(0);\n      }\n\n      rdr.Close();\n   }\n   connection.Close();\n}\n
\n

You should get back the resulting "$action" from that data reader.

\n soup wrap:

What you could do is create a temporary table (or a table variable) and send your output there - add some meaningful fields to your OUTPUT clause to make it clear what row was affected by what action:

DECLARE @OutputTable TABLE (Guid UNIQUEIDENTIFIER, Action VARCHAR(100))

MERGE INTO TestTable as target
USING ( select '00D81CB4EA0842EF9E158BB8FEC48A1E' )
AS source (Guid)
ON ( target.Guid = source.Guid ) 
WHEN MATCHED THEN
UPDATE SET Test_Column = NULL
WHEN NOT MATCHED THEN
INSERT (Guid, Test_Column) VALUES ('00D81CB4EA0842EF9E158BB8FEC48A1E', NULL)
OUTPUT INSERTED.Guid, $action INTO @OutputTable

SELECT
   Guid, Action
FROM
   @OutputTable

UPDATE: ah, okay, so you want to call this from .NET! Well, in that case, just call it using the .ExecuteReader() method on your SqlCommand object - the stuff you're outputting using OUTPUT... will be returned to the .NET caller as a result set - you can loop through that:

using(SqlCommand cmd = new SqlCommand(mergeStmt, connection))
{
   connection.Open();

   using(SqlDataReader rdr = cmd.ExecuteReader())
   {
      while(rdr.Read())
      {
         var outputAction = rdr.GetValue(0);
      }

      rdr.Close();
   }
   connection.Close();
}

You should get back the resulting "$action" from that data reader.

qid & accept id: (2978700, 2978764) query: Calculate running total in SQLite table using triggers soup:
    \n
  1. Please check the value of SQLITE_MAX_TRIGGER_DEPTH. Could it be set to 1 instead of default 1000?

  2. \n
  3. Please check your SQLite version. Before 3.6.18, recursive triggers were not supported.

  4. \n
\n

Please note that the following worked for me 100% OK

\n

drop table "AccountBalances"

\n
CREATE TEMP TABLE "AccountBalances" (\n  "Id" INTEGER PRIMARY KEY, \n  "Balance" REAL);\n\nINSERT INTO "AccountBalances" values (1,0)\nINSERT INTO "AccountBalances" values (2,0);\nINSERT INTO "AccountBalances" values (3,0);\nINSERT INTO "AccountBalances" values (4,0);\nINSERT INTO "AccountBalances" values (5,0);\nINSERT INTO "AccountBalances" values (6,0);\n\nCREATE TRIGGER UpdateAccountBalance AFTER UPDATE ON AccountBalances\nBEGIN\n UPDATE AccountBalances \n    SET Balance = 1 + new.Balance \n  WHERE Id = new.Id + 1;\nEND;\n\nPRAGMA recursive_triggers = 'on';\n\nUPDATE AccountBalances \n   SET Balance = 1 \n WHERE Id = 1\n\nselect * from "AccountBalances";\n
\n

Resulted in:

\n
Id  Balance\n1   1\n2   2\n3   3\n4   4\n5   5\n6   6\n
\n soup wrap:
  1. Please check the value of SQLITE_MAX_TRIGGER_DEPTH. Could it be set to 1 instead of default 1000?

  2. Please check your SQLite version. Before 3.6.18, recursive triggers were not supported.

Please note that the following worked for me 100% OK:

drop table "AccountBalances";

CREATE TEMP TABLE "AccountBalances" (
  "Id" INTEGER PRIMARY KEY, 
  "Balance" REAL);

INSERT INTO "AccountBalances" values (1,0);
INSERT INTO "AccountBalances" values (2,0);
INSERT INTO "AccountBalances" values (3,0);
INSERT INTO "AccountBalances" values (4,0);
INSERT INTO "AccountBalances" values (5,0);
INSERT INTO "AccountBalances" values (6,0);

CREATE TRIGGER UpdateAccountBalance AFTER UPDATE ON AccountBalances
BEGIN
 UPDATE AccountBalances 
    SET Balance = 1 + new.Balance 
  WHERE Id = new.Id + 1;
END;

PRAGMA recursive_triggers = 'on';

UPDATE AccountBalances 
   SET Balance = 1 
 WHERE Id = 1;

select * from "AccountBalances";

Resulted in:

Id  Balance
1   1
2   2
3   3
4   4
5   5
6   6
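The same experiment can be reproduced from Python's sqlite3 module (any reasonably recent SQLite is well past 3.6.18), which also shows the pragma being set before the cascading update:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA recursive_triggers = ON")  # off by default; required here
conn.execute('CREATE TABLE AccountBalances ("Id" INTEGER PRIMARY KEY, "Balance" REAL)')
conn.executemany("INSERT INTO AccountBalances VALUES (?, 0)", [(i,) for i in range(1, 7)])
conn.execute("""
    CREATE TRIGGER UpdateAccountBalance AFTER UPDATE ON AccountBalances
    BEGIN
        UPDATE AccountBalances
           SET Balance = 1 + new.Balance
         WHERE Id = new.Id + 1;
    END""")
# each update fires the trigger on the next row until no row matches
conn.execute("UPDATE AccountBalances SET Balance = 1 WHERE Id = 1")
rows = conn.execute("SELECT Id, Balance FROM AccountBalances ORDER BY Id").fetchall()
print(rows)
```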
qid & accept id: (3005323, 3005737) query: How can I manage a FIFO-queue in an database with SQL? soup:

Reading the comments, you say that you are willing to add an auto-increment or date field to record the proper position of each row. Once you add this, I would recommend adding one more column to the In table, called Processed, which is automatically set to false when the row is added to the table. Any rows that have already been copied to OUT have their Processed field set to true.

\n
+----+\n| In |\n+-----------+-----------+-------+-----------+\n| AUtoId    | Supply_ID | Price | Processed |\n+-----------+-----------+-------+-----------+\n|     1     |     1     |  75   |     1     |\n|     2     |     1     |  75   |     1     |\n|     3     |     1     |  75   |     0     |\n|     4     |     2     |  80   |     0     |\n|     5     |     2     |  80   |     0     |\n+-----------+-----------+-------+---------- +\n
\n

Then to find the next item to move to OUT you can do

\n
SELECT TOP 1 Supply_ID, Price \nFROM In WHERE Processed = 0\nORDER BY [Your Auto Increment Field or Date]\n
\n

Once the row is moved to OUT then you just UPDATE the processed field of that row to true.

\n soup wrap:

Reading the comments, you say that you are willing to add an auto-increment or date field to record the proper position of each row. Once you add this, I would recommend adding one more column to the In table, called Processed, which is automatically set to false when the row is added to the table. Any rows that have already been copied to OUT have their Processed field set to true.

+----+
| In |
+-----------+-----------+-------+-----------+
| AUtoId    | Supply_ID | Price | Processed |
+-----------+-----------+-------+-----------+
|     1     |     1     |  75   |     1     |
|     2     |     1     |  75   |     1     |
|     3     |     1     |  75   |     0     |
|     4     |     2     |  80   |     0     |
|     5     |     2     |  80   |     0     |
+-----------+-----------+-------+---------- +

Then to find the next item to move to OUT you can do

SELECT TOP 1 Supply_ID, Price 
FROM In WHERE Processed = 0
ORDER BY [Your Auto Increment Field or Date]

Once the row is moved to OUT then you just UPDATE the processed field of that row to true.
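A minimal sketch of the whole fetch-then-flag cycle, using SQLite from Python. Two assumptions on my part: the table is renamed InQueue because In is a reserved word, and LIMIT 1 stands in for SQL Server's TOP 1:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE InQueue (
    AutoId    INTEGER PRIMARY KEY,  -- auto-increment field giving FIFO order
    Supply_ID INTEGER,
    Price     INTEGER,
    Processed INTEGER DEFAULT 0)""")
conn.executemany("INSERT INTO InQueue (Supply_ID, Price) VALUES (?, ?)",
                 [(1, 75), (1, 75), (2, 80)])

def pop_next(conn):
    """Fetch the oldest unprocessed row, then flag it as processed.
    In a concurrent system, wrap both statements in one transaction."""
    row = conn.execute(
        "SELECT AutoId, Supply_ID, Price FROM InQueue "
        "WHERE Processed = 0 ORDER BY AutoId LIMIT 1").fetchone()
    if row is not None:
        conn.execute("UPDATE InQueue SET Processed = 1 WHERE AutoId = ?", (row[0],))
    return row

print(pop_next(conn))  # the oldest row comes out first
```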

qid & accept id: (3035105, 3035145) query: Self join to a table soup:
select e1.* from Employee e1, Employee e2  where \n           e2.name = 'a' and\n           e1.salary > e2.salary\n
\n

Using self join

\n
 select e1.* from Employee e1 join Employee e2  on \n           e2.name = 'a' and\n           e1.salary > e2.salary\n
\n soup wrap:
select e1.* from Employee e1, Employee e2  where 
           e2.name = 'a' and
           e1.salary > e2.salary

Using an explicit self join

 select e1.* from Employee e1 join Employee e2  on 
           e2.name = 'a' and
           e1.salary > e2.salary
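Both forms return every employee earning more than 'a'; a quick check with SQLite from Python (the sample data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employee (name TEXT, salary INTEGER)")
conn.executemany("INSERT INTO Employee VALUES (?, ?)",
                 [("a", 100), ("b", 150), ("c", 90), ("d", 200)])

# same join condition as the examples above
rows = conn.execute("""
    select e1.* from Employee e1 join Employee e2 on
        e2.name = 'a' and
        e1.salary > e2.salary""").fetchall()
print(sorted(rows))  # employees paid more than 'a'
```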
qid & accept id: (3053125, 3053404) query: Shrinking database soup:

Firstly, if you can avoid shrinking a production database then do so. Buying additional disk storage is almost always the more practical solution in the long run.

\n

There is a reason that your database data/log files have grown to their current size and unless you have purged data from your database then it is very likely (if not a certainty) that your database will grow to the current size once again, post shrink exercise.

\n

With this in mind you should look to identify the cause of your database growth.

\n

Finally, if you absolutely must shrink your database, choose the time to do so wisely, i.e. perform this maintenance at a time when your live system typically experiences lower workload. Shrinking data files causes a significant amount of disk I/O, especially if the data pages are to be reorganized.

\n

Then identify which data files or log files contain the most free space and target these to be shrunk individually. There is no point in performing a database wide shrink exercise if for example it is only the log file that has a significant amount of free space.

\n

In order to do this, consult the documentation for the DBCC SHRINKFILE command.

\n

Useful Information:

\n

Identify the amount of free space in the database overall:

\n
EXEC sp_spaceused\n
\n

Identify the amount of free log space:

\n
DBCC SQLPERF('logspace')\n
\n

Identify the amount of free space per data/log file:

\n
SELECT \n    name AS 'File Name' , \n    physical_name AS 'Physical Name', \n    size/128 AS 'Total Size in MB',\n    size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS 'Available Space In MB',\n    *\nFROM sys.database_files;\n
\n soup wrap:

Firstly, if you can avoid shrinking a production database then do so. Buying additional disk storage is almost always the more practical solution in the long run.

There is a reason that your database data/log files have grown to their current size and unless you have purged data from your database then it is very likely (if not a certainty) that your database will grow to the current size once again, post shrink exercise.

With this in mind you should look to identify the cause of your database growth.

Finally, if you absolutely must shrink your database, choose the time to do so wisely, i.e. perform this maintenance at a time when your live system typically experiences lower workload. Shrinking data files causes a significant amount of disk I/O, especially if the data pages are to be reorganized.

Then identify which data files or log files contain the most free space and target these to be shrunk individually. There is no point in performing a database wide shrink exercise if for example it is only the log file that has a significant amount of free space.

In order to do this, consult the documentation for the DBCC SHRINKFILE command.

Useful Information:

Identify the amount of free space in the database overall:

EXEC sp_spaceused

Identify the amount of free log space:

DBCC SQLPERF('logspace')

Identify the amount of free space per data/log file:

SELECT 
    name AS 'File Name' , 
    physical_name AS 'Physical Name', 
    size/128 AS 'Total Size in MB',
    size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS 'Available Space In MB',
    *
FROM sys.database_files;
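For example, once the query above has identified a file with substantial free space, that single file can be targeted. This is only a sketch: MyDatabase and MyDatabase_Log are placeholder names, and 1024 is an arbitrary target size in MB.

```sql
USE MyDatabase;
GO
-- shrink only the log file, down to roughly 1024 MB
DBCC SHRINKFILE (MyDatabase_Log, 1024);
GO
```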
qid & accept id: (3084672, 3084703) query: TSQL Howto get count of unique users? soup:

You can:

\n
SELECT COUNT(DISTINCT userID) \nFROM Tbl\n
\n

You can give the count column a name by aliasing it:

\n
SELECT COUNT(DISTINCT userID)  NumberOfDistinctUsers\nFROM Tbl\n
\n soup wrap:

You can:

SELECT COUNT(DISTINCT userID) 
FROM Tbl

You can give the count column a name by aliasing it:

SELECT COUNT(DISTINCT userID)  NumberOfDistinctUsers
FROM Tbl
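A quick sanity check with made-up data, run through SQLite from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Tbl (userID INTEGER)")
conn.executemany("INSERT INTO Tbl VALUES (?)", [(1,), (1,), (2,), (3,), (3,)])

# five rows, but only three distinct users
n = conn.execute(
    "SELECT COUNT(DISTINCT userID) NumberOfDistinctUsers FROM Tbl").fetchone()[0]
print(n)  # 3
```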
qid & accept id: (3167775, 3167957) query: SQL - Grab Detail Rows as Columns in Join soup:
select \n    C.ACCOUNTNO,\n    C.CONTACT,\n    C.KEY1,\n    C.KEY4,  \n    HichschoolCS.State as HighSchool,  \n    TestSatCS.state as Test\n\n\nfrom \n    contact1 C\n    left join CONTSUPP HichschoolCS on C.accountno=HichschoolCS.accountno \n        and HichschoolCS.contact = 'High School'\n    left join CONTSUPP TestSatCS on C.accountno=TestSatCS.accountno \n        and TestSatCS.contact = 'Test/SAT'\nwhere \n    C.KEY1!='00PRSP' \n    AND (C.U_KEY2='2009 FALL' \n    OR C.U_KEY2='2010 SPRING' \n    OR C.U_KEY2='2010 J TERM' \n    OR C.U_KEY2='2010 SUMMER')\n
\n

Update: Added example of only using the highest SAT score

\n
select \n    C.ACCOUNTNO,\n    C.CONTACT,\n    C.KEY1,\n    C.KEY4,  \n    HichschoolCS.State as HighSchool,  \n    TestSatCS.state as Test\n\n\nfrom \n    contact1 C\n    left join CONTSUPP HichschoolCS on C.accountno=HichschoolCS.accountno \n        and HichschoolCS.contact = 'High School'\n    left join (SELECT MAX(state) state, \n        accountno\n        FROM\n            CONTSUPP TestSatCS \n        WHERE \n            contact = 'Test/SAT'\n        GROUP\n            accountno) TestSatCS\n    on C.accountno=TestSatCS.accountno \n\nwhere \n    C.KEY1!='00PRSP' \n    AND (C.U_KEY2='2009 FALL' \n    OR C.U_KEY2='2010 SPRING' \n    OR C.U_KEY2='2010 J TERM' \n    OR C.U_KEY2='2010 SUMMER')\n
\n soup wrap:
select 
    C.ACCOUNTNO,
    C.CONTACT,
    C.KEY1,
    C.KEY4,  
    HichschoolCS.State as HighSchool,  
    TestSatCS.state as Test


from 
    contact1 C
    left join CONTSUPP HichschoolCS on C.accountno=HichschoolCS.accountno 
        and HichschoolCS.contact = 'High School'
    left join CONTSUPP TestSatCS on C.accountno=TestSatCS.accountno 
        and TestSatCS.contact = 'Test/SAT'
where 
    C.KEY1!='00PRSP' 
    AND (C.U_KEY2='2009 FALL' 
    OR C.U_KEY2='2010 SPRING' 
    OR C.U_KEY2='2010 J TERM' 
    OR C.U_KEY2='2010 SUMMER')

Update: Added example of only using the highest SAT score

select 
    C.ACCOUNTNO,
    C.CONTACT,
    C.KEY1,
    C.KEY4,  
    HichschoolCS.State as HighSchool,  
    TestSatCS.state as Test


from 
    contact1 C
    left join CONTSUPP HichschoolCS on C.accountno=HichschoolCS.accountno 
        and HichschoolCS.contact = 'High School'
    left join (SELECT MAX(state) state, 
        accountno
        FROM
            CONTSUPP TestSatCS 
        WHERE 
            contact = 'Test/SAT'
        GROUP BY
            accountno) TestSatCS
    on C.accountno=TestSatCS.accountno 

where 
    C.KEY1!='00PRSP' 
    AND (C.U_KEY2='2009 FALL' 
    OR C.U_KEY2='2010 SPRING' 
    OR C.U_KEY2='2010 J TERM' 
    OR C.U_KEY2='2010 SUMMER')
qid & accept id: (3240290, 3240324) query: How to find rows where a set of numbers is between two numbers? soup:

Using a JOIN, but risks duplicates:

\n
SELECT t.*\n  FROM TABLE1 t\n  JOIN (SELECT Sequence FROM Table1 WHERE Hash=2783342) x ON x.sequence BETWEEN t.sequence \n                                                                            AND t.sequenceend\n
\n

Using EXISTS, no duplicate risk:

\n
SELECT t.*\n  FROM TABLE1 t\n WHERE EXISTS(SELECT NULL\n                FROM TABLE1 x\n               WHERE x.hash = 2783342\n                 AND x.sequence BETWEEN t.sequence \n                                    AND t.sequenceend)\n
\n soup wrap:

Using a JOIN, but risks duplicates:

SELECT t.*
  FROM TABLE1 t
  JOIN (SELECT Sequence FROM Table1 WHERE Hash=2783342) x ON x.sequence BETWEEN t.sequence 
                                                                            AND t.sequenceend

Using EXISTS, no duplicate risk:

SELECT t.*
  FROM TABLE1 t
 WHERE EXISTS(SELECT NULL
                FROM TABLE1 x
               WHERE x.hash = 2783342
                 AND x.sequence BETWEEN t.sequence 
                                    AND t.sequenceend)
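A small check of the EXISTS form against invented data; note the anchor row matches itself, since its own sequence falls inside its own range:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (Hash INTEGER, Sequence INTEGER, SequenceEnd INTEGER)")
conn.executemany("INSERT INTO Table1 VALUES (?, ?, ?)",
                 [(2783342, 100, 100),   # the row we anchor on
                  (1, 50, 150),          # range containing 100
                  (2, 200, 300)])        # range not containing 100

rows = conn.execute("""
    SELECT t.* FROM Table1 t
     WHERE EXISTS(SELECT NULL FROM Table1 x
                   WHERE x.Hash = 2783342
                     AND x.Sequence BETWEEN t.Sequence AND t.SequenceEnd)""").fetchall()
print(sorted(rows))
```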
qid & accept id: (3244796, 3244864) query: Stored procedure - Passing a parameter as xml and reading the data soup:

You just need a WHERE clause I think.

\n
   INSERT INTO SN_IO ( [C1] ,[C2]  ,[C3] )\n   SELECT [C1] ,[C2] ,[C3]\n   FROM OPENXML (@currRecord, 'ios/io', 1)\n   WITH ([C1] [varchar](25)       'C1',\n         [C2] [varchar](25)       'C2',\n         [C3] [varchar](20)       'C3'  )    \n    WHERE  [C1]  IS NOT NULL  AND [C2]  IS NOT NULL AND [C3] IS NOT NULL  \n
\n

Or you can do it in the XPath instead which I guess may be more efficient

\n
   FROM OPENXML (@currRecord, 'ios/io[C1 and C2 and C3]', 1)\n
\n soup wrap:

You just need a WHERE clause I think.

   INSERT INTO SN_IO ( [C1] ,[C2]  ,[C3] )
   SELECT [C1] ,[C2] ,[C3]
   FROM OPENXML (@currRecord, 'ios/io', 1)
   WITH ([C1] [varchar](25)       'C1',
         [C2] [varchar](25)       'C2',
         [C3] [varchar](20)       'C3'  )    
    WHERE  [C1]  IS NOT NULL  AND [C2]  IS NOT NULL AND [C3] IS NOT NULL  

Or you can do it in the XPath instead which I guess may be more efficient

   FROM OPENXML (@currRecord, 'ios/io[C1 and C2 and C3]', 1)
qid & accept id: (3296390, 3296777) query: Enforcing uniqueness on PostgreSQL table column after non-unique values already inserted soup:

The query you're looking for is:

\n
select distinct on (my_unique_1, my_unique_2) * from my_table;\n
\n

This selects one row for each combination of columns within distinct on. Actually, it's always the first row. It's rarely used without order by since there is no reliable order in which the rows are returned (and so which is the first one).

\n

Combined with order by you can choose which rows are the first (this leaves rows with the greatest last_update_date):

\n
 select distinct on (my_unique_1, my_unique_2) * \n from my_table order by my_unique_1, my_unique_2, last_update_date desc;\n
\n

Now you can select this into a new table:

\n
 create table my_new_table as\n select distinct on (my_unique_1, my_unique_2) * \n from my_table order by my_unique_1, my_unique_2, last_update_date desc;\n
\n

Or you can use it for delete, assuming row_id is a primary key:

\n
 delete from my_table where row_id not in (\n     select distinct on (my_unique_1, my_unique_2) row_id \n     from my_table order by my_unique_1, my_unique_2, last_update_date desc);\n
\n soup wrap:

The query you're looking for is:

select distinct on (my_unique_1, my_unique_2) * from my_table;

This selects one row for each combination of columns within distinct on. Actually, it's always the first row. It's rarely used without order by since there is no reliable order in which the rows are returned (and so which is the first one).

Combined with order by you can choose which rows are the first (this leaves rows with the greatest last_update_date):

 select distinct on (my_unique_1, my_unique_2) * 
 from my_table order by my_unique_1, my_unique_2, last_update_date desc;

Now you can select this into a new table:

 create table my_new_table as
 select distinct on (my_unique_1, my_unique_2) * 
 from my_table order by my_unique_1, my_unique_2, last_update_date desc;

Or you can use it for delete, assuming row_id is a primary key:

 delete from my_table where row_id not in (
     select distinct on (my_unique_1, my_unique_2) row_id 
     from my_table order by my_unique_1, my_unique_2, last_update_date desc);
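With the duplicates gone, the uniqueness the question asks about can finally be enforced (a sketch; the constraint name is arbitrary):

```sql
alter table my_table
    add constraint my_table_uniq unique (my_unique_1, my_unique_2);
```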
qid & accept id: (3317750, 3317795) query: Counting all other types but the current one soup:

You can do one query to get the distinct types, and LEFT JOIN the same table, checking for type-inequality:

\n
SELECT t1.type,\n       SUM(t2.some_value) / COUNT(t2.type)\nFROM ( SELECT DISTINCT type FROM temptable ) t1\nLEFT JOIN temptable t2 ON ( t1.type <> t2.type )\nGROUP BY t1.type\n
\n

Since you only want the average, you could replace the line

\n
FROM ( SELECT DISTINCT type FROM temptable ) t1\n
\n

by

\n
FROM temptable t1\n
\n

but the first solution might perform better, since the number of rows is reduced earlier.

\n soup wrap:

You can do one query to get the distinct types, and LEFT JOIN the same table, checking for type-inequality:

SELECT t1.type,
       SUM(t2.some_value) / COUNT(t2.type)
FROM ( SELECT DISTINCT type FROM temptable ) t1
LEFT JOIN temptable t2 ON ( t1.type <> t2.type )
GROUP BY t1.type

Since you only want the average, you could replace the line

FROM ( SELECT DISTINCT type FROM temptable ) t1

by

FROM temptable t1

but the first solution might perform better, since the number of rows is reduced earlier.
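Here is the query run against a tiny invented table via SQLite from Python; each type's result is the average of the other types' values (values chosen so the integer division is exact):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE temptable (type TEXT, some_value INTEGER)")
conn.executemany("INSERT INTO temptable VALUES (?, ?)",
                 [("A", 3), ("B", 6), ("C", 9), ("D", 12)])

rows = conn.execute("""
    SELECT t1.type,
           SUM(t2.some_value) / COUNT(t2.type)
    FROM ( SELECT DISTINCT type FROM temptable ) t1
    LEFT JOIN temptable t2 ON ( t1.type <> t2.type )
    GROUP BY t1.type
    ORDER BY t1.type""").fetchall()
print(rows)  # for each type, the average over the three other rows
```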

qid & accept id: (3318852, 3318885) query: what is the quickest way to run a query to find where 2 fields are the same soup:

EDIT

\n

Concatenation can give false answers, as pointed out in the comments ('Roberto Neil' vs 'Robert ONeil').

\n

Here is an answer that eliminates the concatenation issue. I found out the non duplicates and eliminated them from the final answer.

\n
WITH MyTable AS\n(\n    SELECT 1 as ID, 'John' as FirstName, 'Doe' as LastName\n    UNION\n    SELECT 2 as ID, 'John' as FirstName, 'Doe' as LastName\n    UNION\n    SELECT 3 as ID, 'Tim' as FirstName, 'Doe' as LastName\n    UNION\n    SELECT 4 as ID, 'Jane' as FirstName, 'Doe' as LastName\n    UNION\n    SELECT 5 as ID, 'Jane' as FirstName, 'Doe' as LastName\n)\nSELECT Id, FirstName, LastName\nFROM MyTable SelectTable\nWHERE Id Not In\n(\n    SELECT Min (Id)\n    From MyTable SearchTable\n    GROUP BY FirstName, LastName\n    HAVING COUNT (*) = 1\n)\n
\n
\n

OLD SOLUTION

\n

Use GROUP BY and HAVING.. check out this working sample

\n
WITH MyTable AS\n(\nSELECT 1 as ID, 'John' as FirstName, 'Doe' as LastName\nUNION\nSELECT 2 as ID, 'John' as FirstName, 'Doe' as LastName\nUNION\nSELECT 3 as ID, 'Time' as FirstName, 'Doe' as LastName\nUNION\nSELECT 4 as ID, 'Jane' as FirstName, 'Doe' as LastName\n)\nSELECT ID, FirstName, LastName\nFROM MyTable\nWHERE FirstName + LastName IN\n(\n    SELECT FirstName + LastName\n    FROM MyTable\n    GROUP BY FirstName + LastName\n    HAVING COUNT (*) > 1\n)\n
\n

This will result in the following

\n
ID          FirstName LastName\n----------- --------- --------\n1           John      Doe\n2           John      Doe\n
\n soup wrap:

EDIT

Concatenation will give false answers, as pointed out in the comments ('Roberto Neil' vs 'Robert ONeil').

Here is an answer that avoids the concatenation issue: it finds the non-duplicates and excludes them from the final result.

WITH MyTable AS
(
    SELECT 1 as ID, 'John' as FirstName, 'Doe' as LastName
    UNION
    SELECT 2 as ID, 'John' as FirstName, 'Doe' as LastName
    UNION
    SELECT 3 as ID, 'Tim' as FirstName, 'Doe' as LastName
    UNION
    SELECT 4 as ID, 'Jane' as FirstName, 'Doe' as LastName
    UNION
    SELECT 5 as ID, 'Jane' as FirstName, 'Doe' as LastName
)
SELECT Id, FirstName, LastName
FROM MyTable SelectTable
WHERE Id Not In
(
    SELECT Min (Id)
    From MyTable SearchTable
    GROUP BY FirstName, LastName
    HAVING COUNT (*) = 1
)

OLD SOLUTION

Use GROUP BY and HAVING; check out this working sample:

WITH MyTable AS
(
SELECT 1 as ID, 'John' as FirstName, 'Doe' as LastName
UNION
SELECT 2 as ID, 'John' as FirstName, 'Doe' as LastName
UNION
SELECT 3 as ID, 'Time' as FirstName, 'Doe' as LastName
UNION
SELECT 4 as ID, 'Jane' as FirstName, 'Doe' as LastName
)
SELECT ID, FirstName, LastName
FROM MyTable
WHERE FirstName + LastName IN
(
    SELECT FirstName + LastName
    FROM MyTable
    GROUP BY FirstName + LastName
    HAVING COUNT (*) > 1
)

This will result in the following

ID          FirstName LastName
----------- --------- --------
1           John      Doe
2           John      Doe
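The EDIT query (finding duplicates without concatenation) can be checked against SQLite with the same five-row sample:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE MyTable (ID INTEGER, FirstName TEXT, LastName TEXT);
    INSERT INTO MyTable VALUES
        (1, 'John', 'Doe'), (2, 'John', 'Doe'), (3, 'Tim', 'Doe'),
        (4, 'Jane', 'Doe'), (5, 'Jane', 'Doe');
""")

# Groups with COUNT(*) = 1 are the non-duplicates; excluding their
# (single) MIN(ID) leaves exactly the duplicated rows.
dupes = con.execute("""
    SELECT ID, FirstName, LastName FROM MyTable
    WHERE ID NOT IN (
        SELECT MIN(ID) FROM MyTable
        GROUP BY FirstName, LastName
        HAVING COUNT(*) = 1)
    ORDER BY ID
""").fetchall()
print([r[0] for r in dupes])  # [1, 2, 4, 5] -- Tim (ID 3) is unique
```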
qid & accept id: (3332230, 3332280) query: I need to know how i can write IF statements and CASE break statements that use and execute queries, etc in MySQL? soup:

To my knowledge, MySQL doesn't support a table-valued data type. The function you posted would be used like this:

SELECT simplecompare(yt.n, yt.m) AS eval
    FROM YOUR_TABLE yt

...which would return:

eval
--------
1 = 1
2 < 3
etc.

SQL is set based, which is different from typical programming (procedural or OO).
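To make the set-based usage concrete, here is a sketch where a simplecompare-like scalar function (my own guess at its behaviour, inferred from the sample output) is registered with SQLite and applied row by row in a SELECT:

```python
import sqlite3

def simplecompare(n, m):
    # Hypothetical reimplementation of the posted function, inferred
    # from the sample output ('1 = 1', '2 < 3', ...).
    op = '=' if n == m else ('<' if n < m else '>')
    return f"{n} {op} {m}"

con = sqlite3.connect(":memory:")
con.create_function("simplecompare", 2, simplecompare)
con.execute("CREATE TABLE your_table (n INTEGER, m INTEGER)")
con.executemany("INSERT INTO your_table VALUES (?, ?)", [(1, 1), (2, 3)])

evals = [r[0] for r in con.execute(
    "SELECT simplecompare(yt.n, yt.m) AS eval FROM your_table yt")]
print(evals)  # ['1 = 1', '2 < 3']
```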

qid & accept id: (3345268, 3345450) query: How to delete completely duplicate rows soup:

Try this - it will delete all duplicates from your table:

;WITH duplicates AS
(
    SELECT 
       ProductID, ProductName, Description, Category,
       ROW_NUMBER() OVER (PARTITION BY ProductID, ProductName
                          ORDER BY ProductID) 'RowNum'
    FROM dbo.tblProduct
)
DELETE FROM duplicates
WHERE RowNum > 1
GO

SELECT * FROM dbo.tblProduct
GO

Your duplicates should be gone now: output is:

ProductID   ProductName   DESCRIPTION        Category
   1          Cinthol         cosmetic soap      soap
   1          Lux             cosmetic soap      soap
   1          Crowning Glory  cosmetic soap      soap
   2          Cinthol         nice soap          soap
   3          Lux             nice soap          soap
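The deletable CTE is a SQL Server feature. In SQLite the same dedup (keep one row per ProductID plus ProductName pair) can be sketched with the implicit rowid instead; this is a substitute technique, not the answer's code:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tblProduct (ProductID INT, ProductName TEXT);
    INSERT INTO tblProduct VALUES
        (1, 'Cinthol'), (1, 'Cinthol'),   -- exact duplicate
        (1, 'Lux'), (2, 'Cinthol');
""")

# Keep the first physical row of each (ProductID, ProductName) group,
# mirroring what ROW_NUMBER() ... WHERE RowNum > 1 would delete.
con.execute("""
    DELETE FROM tblProduct WHERE rowid NOT IN (
        SELECT MIN(rowid) FROM tblProduct
        GROUP BY ProductID, ProductName)
""")

remaining = con.execute("SELECT COUNT(*) FROM tblProduct").fetchone()[0]
print(remaining)  # 3 distinct (ProductID, ProductName) pairs survive
```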
qid & accept id: (3361768, 3361804) query: Copy data from one column to other column (which is in a different table) soup:

In SQL Server 2008 you can use a multi-table update as follows:

UPDATE tblindiantime 
SET tblindiantime.CountryName = contacts.BusinessCountry
FROM tblindiantime 
JOIN contacts
ON -- join condition here

You need a join condition to specify which row should be updated.

If the target table is currently empty then you should use an INSERT instead:

INSERT INTO tblindiantime (CountryName)
SELECT BusinessCountry FROM contacts
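For the UPDATE case, a portable variant uses a correlated subquery instead of SQL Server's FROM ... JOIN form. Here contact_id is a made-up join column standing in for the real join condition, and the engine is SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE contacts (contact_id INT, BusinessCountry TEXT);
    CREATE TABLE tblindiantime (contact_id INT, CountryName TEXT);
    INSERT INTO contacts VALUES (1, 'India'), (2, 'Ireland');
    INSERT INTO tblindiantime VALUES (1, NULL), (2, NULL);
""")

# Correlated-subquery form of the multi-table UPDATE; works on engines
# without UPDATE ... FROM support.
con.execute("""
    UPDATE tblindiantime
    SET CountryName = (SELECT c.BusinessCountry FROM contacts c
                       WHERE c.contact_id = tblindiantime.contact_id)
""")

rows = con.execute(
    "SELECT CountryName FROM tblindiantime ORDER BY contact_id").fetchall()
print(rows)  # [('India',), ('Ireland',)]
```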
qid & accept id: (3426560, 3426580) query: SQL Server / 2 select in the same Stored procedure soup:

That can be done in a single statement:

SELECT b.*
  FROM TABLE_B b
  JOIN TABLE_A a ON a.id2 = b.id2
 WHERE a.id1 = @ID1

But this means that there will be duplicates if more than one record in TABLE_A relates to a TABLE_B record. In that situation, use EXISTS rather than adding DISTINCT to the previous query:

SELECT b.*
  FROM TABLE_B b
 WHERE EXISTS(SELECT NULL
                FROM TABLE_A a
               WHERE a.id2 = b.id2
                 AND a.id1 = @ID1)

The IN clause is equivalent, but EXISTS will be faster if there are duplicates:

SELECT b.*
  FROM TABLE_B b
 WHERE b.id2 IN (SELECT a.id2
                   FROM TABLE_A a
                  WHERE a.id1 = @ID1)
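A quick sketch of why EXISTS avoids the duplicates the plain JOIN produces, using SQLite with a ? parameter in place of @ID1 and invented sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table_a (id1 INT, id2 INT);
    CREATE TABLE table_b (id2 INT, payload TEXT);
    -- two TABLE_A rows point at the same TABLE_B record
    INSERT INTO table_a VALUES (7, 100), (7, 100), (7, 200);
    INSERT INTO table_b VALUES (100, 'x'), (200, 'y'), (300, 'z');
""")

joined = con.execute("""
    SELECT b.* FROM table_b b
    JOIN table_a a ON a.id2 = b.id2 WHERE a.id1 = ?""", (7,)).fetchall()

exists = con.execute("""
    SELECT b.* FROM table_b b
    WHERE EXISTS (SELECT NULL FROM table_a a
                  WHERE a.id2 = b.id2 AND a.id1 = ?)""", (7,)).fetchall()

print(len(joined), len(exists))  # 3 vs 2: EXISTS returns each b row once
```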
qid & accept id: (3440516, 3440590) query: How do I use dynamic SQL to declare a column name derived from a table name? soup:

This example passes in a table name and a column name:

CREATE PROCEDURE A
  ( tab IN VARCHAR2
  , col_name IN VARCHAR2
  ) IS
BEGIN
   EXECUTE IMMEDIATE 'INSERT INTO ' || tab || '(' || col_name || ') VALUES(123)';
END A;

You need to realise that everything after EXECUTE IMMEDIATE must be a string that contains some valid SQL. A good way to verify this is to set it up in a variable and print it to the screen:

CREATE PROCEDURE A
  ( tab IN VARCHAR2
  , col_name IN VARCHAR2
  ) IS
   v_sql VARCHAR2(2000);
BEGIN
   v_sql := 'INSERT INTO ' || tab || '(' || col_name || ') VALUES(123)';
   DBMS_OUTPUT.PUT_LINE('SQL='||v_sql);
   EXECUTE IMMEDIATE v_sql;
END A;

This should then display something like the following in SQL*Plus:

SQL=INSERT INTO mytable(mycolumn) VALUES(123)

(provided server output is turned on).

EDIT: Since you want the column name to be a local variable that always has the same value, this could be done as:

CREATE PROCEDURE A (tab IN VARCHAR2)
IS
   col_name VARCHAR2(30) := 'MYCOLUMN';
   v_sql VARCHAR2(2000);
BEGIN
   v_sql := 'INSERT INTO ' || tab || '(' || col_name || ') VALUES(123)';
   DBMS_OUTPUT.PUT_LINE('SQL='||v_sql);
   EXECUTE IMMEDIATE v_sql;
END A;
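The same print-before-execute debugging pattern translates outside PL/SQL too. In Python's sqlite3, identifiers likewise cannot be bound as parameters, so the statement is assembled as a string (quoting the names defensively, a choice of mine, not part of the original) and can be printed before running:

```python
import sqlite3

def insert_123(con, tab, col_name):
    # Identifiers cannot be passed as bind parameters, so they are
    # spliced into the SQL text, the counterpart of EXECUTE IMMEDIATE.
    # Double-quoting limits the damage from unexpected characters.
    v_sql = f'INSERT INTO "{tab}" ("{col_name}") VALUES (123)'
    print("SQL=" + v_sql)          # same trick as DBMS_OUTPUT.PUT_LINE
    con.execute(v_sql)

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (mycolumn INTEGER)")
insert_123(con, "mytable", "mycolumn")
value = con.execute("SELECT mycolumn FROM mytable").fetchone()[0]
print(value)  # 123
```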
qid & accept id: (3444082, 3444195) query: How to filter out similar rows (equal on certain columns) based on other column data soup:

Assuming SQL Server 2005+, use:

SELECT x.id,
       x.forename,
       x.surname,
       x.somedate
  FROM (SELECT t.id,
               t.forename,
               t.surname,
               t.somedate,
               ROW_NUMBER() OVER (PARTITION BY t.forename, t.surname 
                                      ORDER BY t.somedate DESC, t.id DESC) AS rank
          FROM TABLE t) x
 WHERE x.rank = 1

A risky approach would be:

  SELECT MAX(t.id) AS id,
         t.forename,
         t.surname,
         MAX(t.somedate) AS somedate
    FROM TABLE t
GROUP BY t.forename, t.surname
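The GROUP BY variant is risky because each MAX is computed independently: the returned id and somedate need not come from the same row. A two-row SQLite example (data invented) shows the mismatch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE people (id INT, forename TEXT, surname TEXT, somedate TEXT);
    -- the highest id and the latest date sit on *different* rows
    INSERT INTO people VALUES
        (1, 'Ann', 'Lee', '2020-06-01'),
        (2, 'Ann', 'Lee', '2020-01-01');
""")

row = con.execute("""
    SELECT MAX(id), MAX(somedate) FROM people
    GROUP BY forename, surname""").fetchone()
print(row)  # (2, '2020-06-01') -- a combination that exists in no row
```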
qid & accept id: (3482194, 3482211) query: Ensuring uniqueness of additions to MySQL table using PHP soup:

You can make the column that stores the User Agent string unique, and do INSERT ... ON DUPLICATE KEY UPDATE for your stats insertions.

For the table:

  CREATE TABLE IF NOT EXISTS `user_agent_stats` (
  `user_agent` varchar(255) collate utf8_bin NOT NULL,
  `hits` int(21) NOT NULL default '1',
  UNIQUE KEY `user_agent` (`user_agent`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_bin;

+------------+--------------+------+-----+---------+-------+
| Field      | Type         | Null | Key | Default | Extra |
+------------+--------------+------+-----+---------+-------+
| user_agent | varchar(255) | NO   | PRI | NULL    |       | 
| hits       | int(21)      | NO   |     | NULL    |       | 
+------------+--------------+------+-----+---------+-------+

You could use the following query to insert user agents:

INSERT INTO user_agent_stats( user_agent ) VALUES('user agent string') ON DUPLICATE KEY UPDATE hits = hits+1;

Executing the above query multiple times gives:

+-------------------+------+
| user_agent        | hits |
+-------------------+------+
| user agent string |    6 | 
+-------------------+------+
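ON DUPLICATE KEY UPDATE is MySQL syntax; the equivalent upsert can be sketched in SQLite (assuming SQLite 3.24+ for ON CONFLICT ... DO UPDATE), reproducing the six-hit counter:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE user_agent_stats (
        user_agent TEXT PRIMARY KEY,
        hits INTEGER NOT NULL DEFAULT 1)
""")

# Same upsert pattern; SQLite spells it ON CONFLICT ... DO UPDATE.
for _ in range(6):
    con.execute("""
        INSERT INTO user_agent_stats (user_agent) VALUES (?)
        ON CONFLICT (user_agent) DO UPDATE SET hits = hits + 1
    """, ("user agent string",))

hits = con.execute("SELECT hits FROM user_agent_stats").fetchone()[0]
print(hits)  # 6
```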
qid & accept id: (3502478, 3502541) query: Top User SQL Query With Categories? soup:

This will return the top 10 users:

SELECT  u.*,
        (
        SELECT  COUNT(*)
        FROM    votes v
        WHERE   v.receiver_id = u.user_id
        ) AS score
FROM    users u
ORDER BY
        score DESC
LIMIT 10

This will return you one top user from each category:

SELECT  u.*
FROM    (
        SELECT  DISTINCT category_id
        FROM    users
        ) uo
JOIN    users u
ON      u.user_id = 
        (
        SELECT  user_id
        FROM    users ui
        WHERE   ui.category_id = uo.category_id
        ORDER BY
                (
                SELECT  COUNT(*)
                FROM    votes v
                WHERE   v.receiver_id = ui.user_id
                ) DESC
        LIMIT 1
        )
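The per-category query runs essentially unchanged on SQLite, since scalar subqueries with ORDER BY ... LIMIT 1 are supported there too; the users and votes below are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (user_id INTEGER PRIMARY KEY, category_id INT);
    CREATE TABLE votes (receiver_id INT);
    INSERT INTO users VALUES (1, 1), (2, 1), (3, 2);
    INSERT INTO votes VALUES (1), (2), (2), (3);   -- user 2 beats user 1
""")

top = con.execute("""
    SELECT u.user_id
    FROM (SELECT DISTINCT category_id FROM users) uo
    JOIN users u ON u.user_id = (
        SELECT ui.user_id FROM users ui
        WHERE ui.category_id = uo.category_id
        ORDER BY (SELECT COUNT(*) FROM votes v
                  WHERE v.receiver_id = ui.user_id) DESC
        LIMIT 1)
    ORDER BY u.user_id
""").fetchall()
print(top)  # [(2,), (3,)] -- one winner per category
```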
qid & accept id: (3526673, 3526711) query: getting all chars before space in SQL SERVER soup:
Select Substring( MyTextColumn, 1, CharIndex( ' ', MyTextColumn ) - 1)

Actually, if these are datetime values, then there is a better way:

Select Cast(DateDiff(d, 0, MyDateColumn) As datetime)
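The Substring/CharIndex form can be sketched in SQLite with substr() and instr(). One caveat worth guarding against: like CHARINDEX, instr() returns 0 when there is no space, which would make the requested length negative, so the sketch adds a CASE for that edge:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (MyTextColumn TEXT)")
con.executemany("INSERT INTO t VALUES (?)",
                [("2010-08-20 12:00:00",), ("nospacehere",)])

# Guard against instr() = 0 (no space found): return the whole string.
rows = [r[0] for r in con.execute("""
    SELECT CASE WHEN instr(MyTextColumn, ' ') = 0 THEN MyTextColumn
                ELSE substr(MyTextColumn, 1, instr(MyTextColumn, ' ') - 1)
           END FROM t""")]
print(rows)  # ['2010-08-20', 'nospacehere']
```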
qid & accept id: (3550497, 3550522) query: SQL Server - Give a Login Permission for Read Access to All Existing and Future Databases soup:

For new databases, add the user in the model database. This is used as the template for all new databases.

USE model
CREATE USER ... FROM LOGIN...
EXEC sp_addrolemember 'db_datareader', '...'

For existing databases, use sp_MSForEachDb

EXEC sp_MSForEachDb '
 USE ?
 CREATE USER ... FROM LOGIN...  
 EXEC sp_addrolemember ''db_datareader'', ''...''
'
qid & accept id: (3579079, 3579462) query: How can you represent inheritance in a database? soup:

@Bill Karwin describes three inheritance models in his SQL Antipatterns book, when proposing solutions to the SQL Entity-Attribute-Value antipattern. This is a brief overview:

Single Table Inheritance (aka Table Per Hierarchy Inheritance):

Using a single table as in your first option is probably the simplest design. As you mentioned, many attributes that are subtype-specific will have to be given a NULL value on rows where these attributes do not apply. With this model, you would have one policies table, which would look something like this:

+------+---------------------+----------+----------------+------------------+
| id   | date_issued         | type     | vehicle_reg_no | property_address |
+------+---------------------+----------+----------------+------------------+
|    1 | 2010-08-20 12:00:00 | MOTOR    | 01-A-04004     | NULL             |
|    2 | 2010-08-20 13:00:00 | MOTOR    | 02-B-01010     | NULL             |
|    3 | 2010-08-20 14:00:00 | PROPERTY | NULL           | Oxford Street    |
|    4 | 2010-08-20 15:00:00 | MOTOR    | 03-C-02020     | NULL             |
+------+---------------------+----------+----------------+------------------+

\------ COMMON FIELDS -------/          \----- SUBTYPE SPECIFIC FIELDS -----/

Keeping the design simple is a plus, but the main problems with this approach are the following:

Concrete Table Inheritance:

Another approach to tackle inheritance is to create a new table for each subtype, repeating all the common attributes in each table. For example:

--// Table: policies_motor
+------+---------------------+----------------+
| id   | date_issued         | vehicle_reg_no |
+------+---------------------+----------------+
|    1 | 2010-08-20 12:00:00 | 01-A-04004     |
|    2 | 2010-08-20 13:00:00 | 02-B-01010     |
|    3 | 2010-08-20 15:00:00 | 03-C-02020     |
+------+---------------------+----------------+

--// Table: policies_property    
+------+---------------------+------------------+
| id   | date_issued         | property_address |
+------+---------------------+------------------+
|    1 | 2010-08-20 14:00:00 | Oxford Street    |   
+------+---------------------+------------------+

This design will basically solve the problems identified for the single table method:

However this model also comes with a few disadvantages:

This is how you would have to query all the policies regardless of the type:

SELECT     date_issued, other_common_fields, 'MOTOR' AS type
FROM       policies_motor
UNION ALL
SELECT     date_issued, other_common_fields, 'PROPERTY' AS type
FROM       policies_property;

Note how adding new subtypes would require the above query to be modified with an additional UNION ALL for each subtype. This can easily lead to bugs in your application if this operation is forgotten.

Class Table Inheritance (aka Table Per Type Inheritance):

This is the solution that @David mentions in the other answer. You create a single table for your base class, which includes all the common attributes. Then you would create specific tables for each subtype, whose primary key also serves as a foreign key to the base table. Example:

CREATE TABLE policies (
   policy_id          int,
   date_issued        datetime,

   -- // other common attributes ...
);

CREATE TABLE policy_motor (
    policy_id         int,
    vehicle_reg_no    varchar(20),

   -- // other attributes specific to motor insurance ...

   FOREIGN KEY (policy_id) REFERENCES policies (policy_id)
);

CREATE TABLE policy_property (
    policy_id         int,
    property_address  varchar(20),

   -- // other attributes specific to property insurance ...

   FOREIGN KEY (policy_id) REFERENCES policies (policy_id)
);

This solution solves the problems identified in the other two designs:

I consider the class table approach to be the most suitable in most situations.
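A runnable sketch of the class table layout, using SQLite with the schema trimmed to the columns shown above (dates and registration numbers copied from the sample data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE policies (
        policy_id INTEGER PRIMARY KEY,
        date_issued TEXT);
    CREATE TABLE policy_motor (
        policy_id INTEGER REFERENCES policies (policy_id),
        vehicle_reg_no TEXT);
    INSERT INTO policies VALUES (1, '2010-08-20 12:00:00'),
                                (3, '2010-08-20 14:00:00');
    INSERT INTO policy_motor VALUES (1, '01-A-04004');
""")

# Common attributes come from the base table; subtype attributes join in
# through the shared primary/foreign key.
motor = con.execute("""
    SELECT p.policy_id, p.date_issued, m.vehicle_reg_no
    FROM policies p JOIN policy_motor m ON m.policy_id = p.policy_id
""").fetchall()
print(motor)  # [(1, '2010-08-20 12:00:00', '01-A-04004')]
```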


The names of these three models come from Martin Fowler's book Patterns of Enterprise Application Architecture.

qid & accept id: (3589286, 3589298) query: Simple MySql - Get Largest Number in Table soup:

Two options - using LIMIT:

  SELECT yt.numeric_column
    FROM YOUR_TABLE yt
ORDER BY yt.numeric_column DESC
   LIMIT 1

Using MAX:

SELECT MAX(yt.numeric_column)
  FROM YOUR_TABLE yt
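Both forms are easy to verify side by side in SQLite (sample values invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE your_table (numeric_column INTEGER)")
con.executemany("INSERT INTO your_table VALUES (?)", [(3,), (41,), (7,)])

via_limit = con.execute("""
    SELECT numeric_column FROM your_table
    ORDER BY numeric_column DESC LIMIT 1""").fetchone()[0]
via_max = con.execute(
    "SELECT MAX(numeric_column) FROM your_table").fetchone()[0]
print(via_limit, via_max)  # both forms agree on the largest value
```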
qid & accept id: (3609687, 3609741) query: Iterating through dates in SQL soup:

Try this:

Select DateAdd(day, 0, DateDiff(day, 0, StartDate)) Date,
    Name, Sum (Work) TotalWork
From TableData
Group By Name, DateAdd(day, 0, DateDiff(day, 0, StartDate)) 

To get the missing days is harder.

   Declare @SD DateTime, @ED DateTime  -- StartDate and EndDate variables
   Select @SD = DateAdd(day, 0, DateDiff(day, 0, Min(StartDate))),
          @ED = DateAdd(day, 0, DateDiff(day, 0, Max(StartDate)))
   From TableData
   Declare @Ds Table (aDate SmallDateTime)
   While @SD <= @ED Begin 
       Insert @Ds(aDate) Values (@SD)
       Set @SD = @SD + 1
   End 
-- ----------------------------------------------------
 Select DateAdd(day, 0, DateDiff(day, 0, td.StartDate)) Date,
    td.Name, Sum (td.Work) TotalWork
 From @Ds ds Left Join TableData td
    On DateAdd(day, 0, DateDiff(day, 0, tD.StartDate)) = ds.aDate 
 Group By Name, DateAdd(day, 0, DateDiff(day, 0, tD.StartDate)) 

EDIT: I am revisiting this with a solution that uses a Common Table Expression (CTE). This does NOT require a dates table.

    Declare @SD DateTime, @ED DateTime  -- assumed already set, as above
    Declare @count integer = datediff(day, @SD, @ED)
    With Ints(i) As
      (Select 0 Union All
    Select i + 1 From Ints
    Where i < @count )  
     Select DateAdd(day, 0, DateDiff(day, 0, td.StartDate)) Date,
         td.Name, Sum (td.Work) TotalWork
     From Ints i 
        Left Join TableData d
           On DateDiff(day, @SD, d.StartDate) = i.i
     Group By d.Name, DateAdd(day, 0, DateDiff(day, 0, d.StartDate)) 
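The same number-generating idea works in SQLite, where WITH RECURSIVE plus the date() modifier produces one row per day between two endpoints (dates here are arbitrary examples):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Same idea as the Ints CTE: generate one row per day between two
# endpoints, using SQLite's recursive CTE and date arithmetic.
days = [r[0] for r in con.execute("""
    WITH RECURSIVE days(d) AS (
        SELECT date('2010-01-01')
        UNION ALL
        SELECT date(d, '+1 day') FROM days
        WHERE d < date('2010-01-05')
    )
    SELECT d FROM days
""")]
print(days)  # five consecutive dates, endpoints included
```

A LEFT JOIN from this day list onto the work table then surfaces the missing days as NULL rows, just as the @Ds table variable does above.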
qid & accept id: (3623645, 3624616) query: How to repair a corrupted MPTT tree (nested set) in the database using SQL? soup:

Using SQL Server, the following script seems to work for me.

Output of the test script:

category_id name                 parent      lft         rgt         lftcalc     rgtcalc
----------- -------------------- ----------- ----------- ----------- ----------- -----------
1           ELECTRONICS          NULL        1           20          1           20
2           TELEVISIONS          1           2           9           2           9
3           TUBE                 2           3           4           3           4
4           LCD                  2           5           6           5           6
5           PLASMA               2           7           8           7           8
6           PORTABLE ELECTRONICS 1           10          19          10          19
7           MP3 PLAYERS          6           11          14          11          14
8           FLASH                7           12          13          12          13
9           CD PLAYERS           6           15          16          15          16
10          2 WAY RADIOS         6           17          18          17          18

Script:

SET NOCOUNT ON
GO

DECLARE @nested_category TABLE (
 category_id INT PRIMARY KEY,
 name VARCHAR(20) NOT NULL,
 parent INT,
 lft INT,
 rgt INT
);

DECLARE @current_Category_ID INTEGER
DECLARE @current_parent INTEGER
DECLARE @SafeGuard INTEGER
DECLARE @myLeft INTEGER
SET @SafeGuard = 100

INSERT INTO @nested_category 
SELECT           1,'ELECTRONICS',NULL,NULL,NULL
UNION ALL SELECT 2,'TELEVISIONS',1,NULL,NULL
UNION ALL SELECT 3,'TUBE',2,NULL,NULL
UNION ALL SELECT 4,'LCD',2,NULL,NULL
UNION ALL SELECT 5,'PLASMA',2,NULL,NULL
UNION ALL SELECT 6,'PORTABLE ELECTRONICS',1,NULL,NULL
UNION ALL SELECT 7,'MP3 PLAYERS',6,NULL,NULL
UNION ALL SELECT 8,'FLASH',7,NULL,NULL
UNION ALL SELECT 9,'CD PLAYERS',6,NULL,NULL
UNION ALL SELECT 10,'2 WAY RADIOS',6,NULL,NULL

/* Initialize */
UPDATE  @nested_category 
SET     lft = 1
        , rgt = 2
WHERE   parent IS NULL

UPDATE  @nested_category 
SET     lft = NULL
        , rgt = NULL
WHERE   parent IS NOT NULL

WHILE EXISTS (SELECT * FROM @nested_category WHERE lft IS NULL) AND @SafeGuard > 0
BEGIN
  SELECT  @current_Category_ID = MAX(nc.category_id)
  FROM    @nested_category nc
          INNER JOIN @nested_category nc2 ON nc2.category_id = nc.parent
  WHERE   nc.lft IS NULL
          AND nc2.lft IS NOT NULL

  SELECT  @current_parent = parent
  FROM    @nested_category
  WHERE   category_id = @current_category_id

  SELECT  @myLeft = lft
  FROM    @nested_category
  WHERE   category_id = @current_parent

  UPDATE @nested_category SET rgt = rgt + 2 WHERE rgt > @myLeft;
  UPDATE @nested_category SET lft = lft + 2 WHERE lft > @myLeft;
  UPDATE @nested_category SET lft = @myLeft + 1, rgt = @myLeft + 2 WHERE category_id = @current_category_id

  SET @SafeGuard = @SafeGuard - 1
END

SELECT * FROM @nested_category ORDER BY category_id

SELECT  COUNT(node.name), node.name, MIN(node.lft)
FROM    @nested_category AS node,
        @nested_category AS parent
WHERE   node.lft BETWEEN parent.lft AND parent.rgt
GROUP BY 
        node.name
ORDER BY
        3, 1

Test script:

SET NOCOUNT ON
GO

DECLARE @nested_category TABLE (
 category_id INT PRIMARY KEY,
 name VARCHAR(20) NOT NULL,
 parent INT,
 lft INT,
 rgt INT, 
 lftcalc INT,
 rgtcalc INT
);

INSERT INTO @nested_category 
SELECT           1,'ELECTRONICS',NULL,1,20,NULL,NULL
UNION ALL SELECT 2,'TELEVISIONS',1,2,9,NULL,NULL
UNION ALL SELECT 3,'TUBE',2,3,4,NULL,NULL
UNION ALL SELECT 4,'LCD',2,5,6,NULL,NULL
UNION ALL SELECT 5,'PLASMA',2,7,8,NULL,NULL
UNION ALL SELECT 6,'PORTABLE ELECTRONICS',1,10,19,NULL,NULL
UNION ALL SELECT 7,'MP3 PLAYERS',6,11,14,NULL,NULL
UNION ALL SELECT 8,'FLASH',7,12,13,NULL,NULL
UNION ALL SELECT 9,'CD PLAYERS',6,15,16,NULL,NULL
UNION ALL SELECT 10,'2 WAY RADIOS',6,17,18,NULL,NULL

/* Initialize */
UPDATE  @nested_category 
SET     lftcalc = 1
        , rgtcalc = 2
WHERE   parent IS NULL

DECLARE @current_Category_ID INTEGER
DECLARE @current_parent INTEGER
DECLARE @SafeGuard INTEGER
DECLARE @myRight INTEGER
DECLARE @myLeft INTEGER
SET @SafeGuard = 100
WHILE EXISTS (SELECT * FROM @nested_category WHERE lftcalc IS NULL) AND @SafeGuard > 0
BEGIN
  SELECT  @current_Category_ID = MAX(nc.category_id)
  FROM    @nested_category nc
          INNER JOIN @nested_category nc2 ON nc2.category_id = nc.parent
  WHERE   nc.lftcalc IS NULL
          AND nc2.lftcalc IS NOT NULL

  SELECT  @current_parent = parent
  FROM    @nested_category
  WHERE   category_id = @current_category_id

  SELECT  @myLeft = lftcalc
  FROM    @nested_category
  WHERE   category_id = @current_parent

  UPDATE @nested_category SET rgtcalc = rgtcalc + 2 WHERE rgtcalc > @myLeft;
  UPDATE @nested_category SET lftcalc = lftcalc + 2 WHERE lftcalc > @myLeft;
  UPDATE @nested_category SET lftcalc = @myLeft + 1, rgtcalc = @myLeft + 2 WHERE category_id = @current_category_id

  SELECT * FROM @nested_category WHERE category_id = @current_parent
  SELECT * FROM @nested_category ORDER BY category_id
  SET @SafeGuard = @SafeGuard - 1
END

SELECT * FROM
@nested_category ORDER BY category_id\n\nSELECT  COUNT(node.name), node.name, MIN(node.lft)\nFROM    @nested_category AS node,\n        @nested_category AS parent\nWHERE   node.lft BETWEEN parent.lft AND parent.rgt\nGROUP BY \n        node.name\nORDER BY\n        3, 1\n
\n soup wrap:

Using SQL Server, the following script seems to work for me.

Output testscript

category_id name                 parent      lft         rgt         lftcalc     rgtcalc
----------- -------------------- ----------- ----------- ----------- ----------- -----------
1           ELECTRONICS          NULL        1           20          1           20
2           TELEVISIONS          1           2           9           2           9
3           TUBE                 2           3           4           3           4
4           LCD                  2           5           6           5           6
5           PLASMA               2           7           8           7           8
6           PORTABLE ELECTRONICS 1           10          19          10          19
7           MP3 PLAYERS          6           11          14          11          14
8           FLASH                7           12          13          12          13
9           CD PLAYERS           6           15          16          15          16
10          2 WAY RADIOS         6           17          18          17          18

Script

SET NOCOUNT ON
GO

DECLARE @nested_category TABLE (
 category_id INT PRIMARY KEY,
 name VARCHAR(20) NOT NULL,
 parent INT,
 lft INT,
 rgt INT
);

DECLARE @current_Category_ID INTEGER
DECLARE @current_parent INTEGER
DECLARE @SafeGuard INTEGER
DECLARE @myLeft INTEGER
SET @SafeGuard = 100

INSERT INTO @nested_category 
SELECT           1,'ELECTRONICS',NULL,NULL,NULL
UNION ALL SELECT 2,'TELEVISIONS',1,NULL,NULL
UNION ALL SELECT 3,'TUBE',2,NULL,NULL
UNION ALL SELECT 4,'LCD',2,NULL,NULL
UNION ALL SELECT 5,'PLASMA',2,NULL,NULL
UNION ALL SELECT 6,'PORTABLE ELECTRONICS',1,NULL,NULL
UNION ALL SELECT 7,'MP3 PLAYERS',6,NULL,NULL
UNION ALL SELECT 8,'FLASH',7,NULL,NULL
UNION ALL SELECT 9,'CD PLAYERS',6,NULL,NULL
UNION ALL SELECT 10,'2 WAY RADIOS',6,NULL,NULL

/* Initialize */
UPDATE  @nested_category 
SET     lft = 1
        , rgt = 2
WHERE   parent IS NULL

UPDATE  @nested_category 
SET     lft = NULL
        , rgt = NULL
WHERE   parent IS NOT NULL

WHILE EXISTS (SELECT * FROM @nested_category WHERE lft IS NULL) AND @SafeGuard > 0
BEGIN
  SELECT  @current_Category_ID = MAX(nc.category_id)
  FROM    @nested_category nc
          INNER JOIN @nested_category nc2 ON nc2.category_id = nc.parent
  WHERE   nc.lft IS NULL
          AND nc2.lft IS NOT NULL

  SELECT  @current_parent = parent
  FROM    @nested_category
  WHERE   category_id = @current_category_id

  SELECT  @myLeft = lft
  FROM    @nested_category
  WHERE   category_id = @current_parent

  UPDATE @nested_category SET rgt = rgt + 2 WHERE rgt > @myLeft;
  UPDATE @nested_category SET lft = lft + 2 WHERE lft > @myLeft;
  UPDATE @nested_category SET lft = @myLeft + 1, rgt = @myLeft + 2 WHERE category_id = @current_category_id

  SET @SafeGuard = @SafeGuard - 1
END

SELECT * FROM @nested_category ORDER BY category_id

SELECT  COUNT(node.name), node.name, MIN(node.lft)
FROM    @nested_category AS node,
        @nested_category AS parent
WHERE   node.lft BETWEEN parent.lft AND parent.rgt
GROUP BY 
        node.name
ORDER BY
        3, 1

Testscript

SET NOCOUNT ON
GO

DECLARE @nested_category TABLE (
 category_id INT PRIMARY KEY,
 name VARCHAR(20) NOT NULL,
 parent INT,
 lft INT,
 rgt INT, 
 lftcalc INT,
 rgtcalc INT
);

INSERT INTO @nested_category 
SELECT           1,'ELECTRONICS',NULL,1,20,NULL,NULL
UNION ALL SELECT 2,'TELEVISIONS',1,2,9,NULL,NULL
UNION ALL SELECT 3,'TUBE',2,3,4,NULL,NULL
UNION ALL SELECT 4,'LCD',2,5,6,NULL,NULL
UNION ALL SELECT 5,'PLASMA',2,7,8,NULL,NULL
UNION ALL SELECT 6,'PORTABLE ELECTRONICS',1,10,19,NULL,NULL
UNION ALL SELECT 7,'MP3 PLAYERS',6,11,14,NULL,NULL
UNION ALL SELECT 8,'FLASH',7,12,13,NULL,NULL
UNION ALL SELECT 9,'CD PLAYERS',6,15,16,NULL,NULL
UNION ALL SELECT 10,'2 WAY RADIOS',6,17,18,NULL,NULL

/* Initialize */
UPDATE  @nested_category 
SET     lftcalc = 1
        , rgtcalc = 2
WHERE   parent IS NULL

DECLARE @current_Category_ID INTEGER
DECLARE @current_parent INTEGER
DECLARE @SafeGuard INTEGER
DECLARE @myRight INTEGER
DECLARE @myLeft INTEGER
SET @SafeGuard = 100
WHILE EXISTS (SELECT * FROM @nested_category WHERE lftcalc IS NULL) AND @SafeGuard > 0
BEGIN
  SELECT  @current_Category_ID = MAX(nc.category_id)
  FROM    @nested_category nc
          INNER JOIN @nested_category nc2 ON nc2.category_id = nc.parent
  WHERE   nc.lftcalc IS NULL
          AND nc2.lftcalc IS NOT NULL

  SELECT  @current_parent = parent
  FROM    @nested_category
  WHERE   category_id = @current_category_id

  SELECT  @myLeft = lftcalc
  FROM    @nested_category
  WHERE   category_id = @current_parent

  UPDATE @nested_category SET rgtcalc = rgtcalc + 2 WHERE rgtcalc > @myLeft;
  UPDATE @nested_category SET lftcalc = lftcalc + 2 WHERE lftcalc > @myLeft;
  UPDATE @nested_category SET lftcalc = @myLeft + 1, rgtcalc = @myLeft + 2 WHERE category_id = @current_category_id

  SELECT * FROM @nested_category WHERE category_id = @current_parent
  SELECT * FROM @nested_category ORDER BY category_id
  SET @SafeGuard = @SafeGuard - 1
END

SELECT * FROM @nested_category ORDER BY category_id

SELECT  COUNT(node.name), node.name, MIN(node.lft)
FROM    @nested_category AS node,
        @nested_category AS parent
WHERE   node.lft BETWEEN parent.lft AND parent.rgt
GROUP BY 
        node.name
ORDER BY
        3, 1
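As a cross-check, the same highest-id-first insertion loop can be sketched in plain Python (a translation of the T-SQL above, not part of the original script); the resulting lft/rgt pairs match the output table shown earlier.

```python
# Iteratively assign nested-set lft/rgt values from (id, name, parent) rows,
# mirroring the T-SQL loop: repeatedly pick the highest-id node whose parent
# already has values, open a 2-wide slot after the parent's lft, and place it.
rows = {
    1: ("ELECTRONICS", None), 2: ("TELEVISIONS", 1), 3: ("TUBE", 2),
    4: ("LCD", 2), 5: ("PLASMA", 2), 6: ("PORTABLE ELECTRONICS", 1),
    7: ("MP3 PLAYERS", 6), 8: ("FLASH", 7), 9: ("CD PLAYERS", 6),
    10: ("2 WAY RADIOS", 6),
}
lft, rgt = {}, {}
for cid, (_, parent) in rows.items():
    if parent is None:          # initialize the root, as in the first UPDATE
        lft[cid], rgt[cid] = 1, 2

while len(lft) < len(rows):
    # highest category_id that is unplaced but whose parent is placed
    cid = max(c for c, (_, p) in rows.items() if c not in lft and p in lft)
    my_left = lft[rows[cid][1]]
    for c in list(lft):         # shift everything to the right of the slot
        if rgt[c] > my_left:
            rgt[c] += 2
        if lft[c] > my_left:
            lft[c] += 2
    lft[cid], rgt[cid] = my_left + 1, my_left + 2

print(lft[1], rgt[1])   # 1 20
print(lft[6], rgt[6])   # 10 19
```

The final dictionaries reproduce every lft/rgt pair in the result set above.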
qid & accept id: (3675616, 3675746) query: Skipping rows in sql query (finding end date based on start date and worked days) soup:

soup wrap:

You could have a where clause that says there must be N working days between the start and the end day. Unlike the row_number() variants, this should work in MS Access. For example:

declare @Task table (taskid int, empid int, start date, days int)
insert @Task values (1, 1, '2010-01-01', 1)
insert @Task values (2, 1, '2010-01-01', 2)
insert @Task values (3, 1, '2010-01-01', 3)

declare @WorkableDays table (empid int, day date)
insert @WorkableDays values (1, '2010-01-01')
insert @WorkableDays values (1, '2010-01-02')
insert @WorkableDays values (1, '2010-01-05')

select  t.taskid
,       t.start
,       endday.day as [end]
from    @Task t
join    @WorkableDays endday
on      endday.empid = t.empid
where   t.days = 
        (
        select  COUNT(*)
        from    @WorkableDays wd
        where   wd.empId = t.empId
                and wd.day between t.start and endday.day
        )

This prints:

taskid   start       end
1        2010-01-01  2010-01-01
2        2010-01-01  2010-01-02
3        2010-01-01  2010-01-05
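The correlated COUNT(*) trick ports to other engines as well; here is a sketch of the same tables and query in SQLite (same data, snake_case names assumed for illustration):

```python
import sqlite3

# A task ends on the workable day such that exactly `days` working days fall
# between start and that day (inclusive) -- the correlated COUNT(*) test.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE task (taskid INT, empid INT, start TEXT, days INT);
INSERT INTO task VALUES (1, 1, '2010-01-01', 1),
                        (2, 1, '2010-01-01', 2),
                        (3, 1, '2010-01-01', 3);
CREATE TABLE workable_days (empid INT, day TEXT);
INSERT INTO workable_days VALUES (1, '2010-01-01'),
                                 (1, '2010-01-02'),
                                 (1, '2010-01-05');
""")
rows = con.execute("""
SELECT t.taskid, t.start, e.day AS end_day
FROM task t
JOIN workable_days e ON e.empid = t.empid
WHERE t.days = (SELECT COUNT(*) FROM workable_days w
                WHERE w.empid = t.empid
                  AND w.day BETWEEN t.start AND e.day)
ORDER BY t.taskid
""").fetchall()
for r in rows:
    print(r)
```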
qid & accept id: (3702873, 3705188) query: MySQL: How to select the UTC offset and DST for all timezones? soup:

soup wrap:

Try this query. The offsettime column is the offset in hours (Offset / 60 / 60).

SELECT tzname.`Time_zone_id`,(`Offset`/60/60) AS `offsettime`,`Is_DST`,`Name`,`Transition_type_id`,`Abbreviation`
FROM `time_zone_transition_type` AS `transition`, `time_zone_name` AS `tzname`
WHERE transition.`Time_zone_id`=tzname.`Time_zone_id`
ORDER BY transition.`Offset` ASC;

The results are

501 -12.00000000    0   0   PHOT    Pacific/Enderbury
369 -12.00000000    0   0   GMT+12  Etc/GMT+12
513 -12.00000000    0   1   KWAT    Pacific/Kwajalein
483 -12.00000000    0   1   KWAT    Kwajalein
518 -11.50000000    0   1   NUT Pacific/Niue
496 -11.50000000    0   1   SAMT    Pacific/Apia
528 -11.50000000    0   1   SAMT    Pacific/Samoa
555 -11.50000000    0   1   SAMT    US/Samoa
521 -11.50000000    0   1   SAMT    Pacific/Pago_Pago
496 -11.44888889    0   0   LMT Pacific/Apia
528 -11.38000000    0   0   LMT Pacific/Samoa
555 -11.38000000    0   0   LMT US/Samoa
521 -11.38000000    0   0   LMT Pacific/Pago_Pago
518 -11.33333333    0   0   NUT Pacific/Niue
544 -11.00000000    0   3   BST US/Aleutian
163 -11.00000000    0   3   BST America/Nome
518 -11.00000000    0   2   NUT Pacific/Niue
496 -11.00000000    0   2   WST Pacific/Apia
544 -11.00000000    0   0   NST US/Aleutian
163 -11.00000000    0   0   NST America/Nome
528 -11.00000000    0   4   SST Pacific/Samoa
528 -11.00000000    0   3   BST Pacific/Samoa
qid & accept id: (3805664, 3805706) query: Sort out the three first occurence of an attribute soup:

soup wrap:

To get events for the next three non-sequential days, starting today, use:

SELECT x.*
  FROM (SELECT ep.*,
               CASE
                 WHEN @dt = ep.startdate THEN @rownum
                 ELSE @rownum := @rownum + 1
               END AS rank,
               @dt := ep.startdate
          FROM EVENT_POST ep
          JOIN (SELECT @rownum := 0, @dt := NULL) r
         WHERE ep.startdate >= CURRENT_DATE
      ORDER BY ep.startdate, ep.starttime) x
 WHERE x.rank <= 3
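The @rownum technique is essentially a dense rank over distinct days. A small Python sketch of the same idea, using illustrative sample events (not taken from the question):

```python
from datetime import date

# Dense-rank events by day and keep everything on the first three distinct
# days, mirroring the @rownum counter in the MySQL query.
events = [(date(2010, 1, 1), "a"), (date(2010, 1, 1), "b"),
          (date(2010, 1, 3), "c"), (date(2010, 1, 7), "d"),
          (date(2010, 1, 9), "e")]
events.sort()
rank, prev, picked = 0, None, []
for day, name in events:
    if day != prev:            # a new distinct day bumps the rank
        rank, prev = rank + 1, day
    if rank <= 3:
        picked.append(name)
print(picked)   # events falling on the first three distinct days
```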

To get events for the next three sequential days, starting today, use the DATE_ADD function:

SELECT ep.*
  FROM EVENT_POST ep
 WHERE ep.startdate BETWEEN DATE(NOW())
                        AND DATE_ADD(DATE(NOW()), INTERVAL 3 DAY)
qid & accept id: (3819810, 3819953) query: Normalizing a table: finding unique columns over series of rows (Oracle 10.x) soup:

soup wrap:

Since 10 tables is not a lot, here is (some sort of) pseudo code

for each table_name in tables
  for each column_name in columns
    case exists (select 1
          from table_name
          group by PersonID
          having min(column_name) = max(column_name))
       when true then 'Worker'
       when false then 'Person'
    end case
  end for
end for

With the information schema and dynamic queries you could make the above proper PL/SQL, or take the core query and script it in your favourite language.

EDIT: The above assumes no NULLs in column_name.

EDIT2: Other variants of the core query can be

SELECT 1
FROM 
(SELECT COUNT(DISTINCT column_name) AS distinct_values_by_pid
FROM table_name
GROUP BY PersonID) T
HAVING MIN(distinct_values_by_pid) = MAX(distinct_values_by_pid)

Which will return a row if all values per PersonID are the same. (this query also has problems with NULLS, but I consider NULLs a separate issue; you can always cast a NULL to some out-of-domain value for purposes of the above query)

The above query can be also written as

SELECT MIN(c1)=MAX(c1), MIN(c2)=MAX(c2), ...
FROM 
(SELECT COUNT(DISTINCT column_name_1) AS c1, COUNT(DISTINCT column_name_2) AS c2, ...
FROM table_name
GROUP BY PersonID) T

Which will test multiple columns at the same time returning true for columns that belong to 'Workers' and false for columns that should go into 'Persons'.
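A sketch of this classification over SQLite, with hypothetical table and column names (`worker_rows`, `dept`, `task`); it applies the MIN/MAX test so that a column only counts as constant if it holds for *every* PersonID group:

```python
import sqlite3

# `dept` is constant within each PersonID (a candidate for the per-person
# table); `task` varies per PersonID. Sample data is illustrative.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE worker_rows (PersonID INT, dept TEXT, task TEXT);
INSERT INTO worker_rows VALUES (1, 'HR', 'hire'), (1, 'HR', 'fire'),
                               (2, 'IT', 'code');
""")

def constant_per_person(column):
    # Count PersonID groups where the column is NOT constant; zero such
    # groups means the column passes the MIN = MAX test everywhere.
    sql = f"""SELECT COUNT(*) FROM (
                SELECT PersonID FROM worker_rows
                GROUP BY PersonID
                HAVING MIN({column}) <> MAX({column}))"""
    return con.execute(sql).fetchone()[0] == 0

print(constant_per_person("dept"))   # True
print(constant_per_person("task"))   # False
```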

qid & accept id: (3821642, 3822300) query: Parse SQL file to separate columns soup:

soup wrap:

What about when there are three e-mails/names? With the data shown it should be easy to do

select replace(substring(substring_index(`Personnel`, ',', 1),length(substring_index(`Personnel`, ',', 1 - 1)) + 1), ',', '') personnel1,
       replace(substring(substring_index(`Personnel`, ',', 2),length(substring_index(`Personnel`, ',', 2 - 1)) + 1), ',', '') personnel2
from `pubs_for_client`

The above will split the Personnel column on delimiter ,.
You can then split these fields on delimiter ( and ) to split personnel into name, position and e-mail

The SQL will be ugly (because MySQL does not have a split function), but it will get the job done.

The split expression was taken from comments on mysql documentation (search for split).

You can also

CREATE FUNCTION strSplit(x varchar(255), delim varchar(12), pos int) returns varchar(255)
return replace(substring(substring_index(x, delim, pos), length(substring_index(x, delim, pos - 1)) + 1), delim, '');

After which you can use

select strSplit(`Personnel`, ',', 1), strSplit(`Personnel`, ',', 2)
from `pubs_for_client`

You could also create your own function that will extract directly names and e-mails.
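For reference, the SUBSTRING_INDEX arithmetic that strSplit() relies on can be mirrored in Python; the sample Personnel value below is made up for illustration:

```python
def substring_index(s, delim, count):
    """Mimic MySQL SUBSTRING_INDEX: everything before the count-th delimiter
    (counting from the left for positive counts, from the right for negative)."""
    parts = s.split(delim)
    if count > 0:
        return delim.join(parts[:count])
    if count < 0:
        return delim.join(parts[count:])
    return ""

def str_split(s, delim, pos):
    """Mimic the strSplit() UDF above: return the pos-th delimited field."""
    left = substring_index(s, delim, pos)
    prior = substring_index(s, delim, pos - 1)
    return left[len(prior):].replace(delim, "")

personnel = "Jane Doe (jane@example.com),John Roe (john@example.com)"
print(str_split(personnel, ",", 1))   # Jane Doe (jane@example.com)
print(str_split(personnel, ",", 2))   # John Roe (john@example.com)
```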

qid & accept id: (3827025, 3827089) query: Matching delimited string to table rows soup:

soup wrap:

Short Term Solution

For your immediate problem, the FIND_IN_SET function is what you want to use for joining:

For People

SELECT p.*
  FROM PEOPLE p
  JOIN HOUSES h ON FIND_IN_SET(p.name, h.people)
 WHERE h.name = ?

For Houses

SELECT h.*
  FROM HOUSES h
  JOIN PEOPLE p ON FIND_IN_SET(h.name, p.houses)
 WHERE p.name = ?
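SQLite has no FIND_IN_SET, but the same join condition can be emulated by padding both the list and the needle with the delimiter; a sketch with made-up house/people rows:

```python
import sqlite3

# Emulate FIND_IN_SET(p.name, h.people): pad both sides with commas so
# 'bob' matches 'alice,bob' but not 'bobby'.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE people (name TEXT, houses TEXT);
CREATE TABLE houses (name TEXT, people TEXT);
INSERT INTO people VALUES ('alice', 'red,blue'), ('bob', 'blue');
INSERT INTO houses VALUES ('red', 'alice'), ('blue', 'alice,bob');
""")
rows = con.execute("""
SELECT p.name
FROM people p
JOIN houses h ON instr(',' || h.people || ',', ',' || p.name || ',') > 0
WHERE h.name = 'blue'
ORDER BY p.name
""").fetchall()
print([r[0] for r in rows])   # ['alice', 'bob']
```

This is exactly the kind of string-matching join the junction table below makes unnecessary.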

Long Term Solution

Is to properly model this by adding a table to link houses to people, because you're likely storing redundant relationships in both tables:

CREATE TABLE people_houses (
  house_id int,
  person_id int,
  PRIMARY KEY (house_id, person_id),
  FOREIGN KEY (house_id) REFERENCES houses (id),
  FOREIGN KEY (person_id) REFERENCES people (id)
)
qid & accept id: (3886340, 3886391) query: SQL Select Return Default Value If Null soup:

soup wrap:

Two things:

  1. Use left outer join instead of inner join to get all the listings, even with missing pictures.
  2. Use coalesce to apply the default

    SELECT Listing.Title
        , Listing.MLS
        , Pictures.PictureTH
        , coalesce(Pictures.Picture, 'default.jpg') as Picture
        , Listing.ID  
    FROM Listing 
    LEFT OUTER JOIN Pictures 
        ON Listing.ID = Pictures.ListingID 
    

EDIT To limit to one row:

SELECT Listing.Title
    , Listing.MLS
    , Pictures.PictureTH
    , coalesce(Pictures.Picture, 'default.jpg') as Picture
    , Listing.ID  
FROM Listing 
LEFT OUTER JOIN Pictures 
    ON Listing.ID = Pictures.ListingID 
WHERE Pictures.ID is null
OR Pictures.ID = (SELECT MIN(ID) 
    FROM Pictures 
    WHERE ListingID = Listing.ID)
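A minimal SQLite sketch of the LEFT OUTER JOIN plus COALESCE default (table and column names simplified from the question):

```python
import sqlite3

# Listings without a picture row still appear, carrying the default filename.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE listing (id INT, title TEXT);
CREATE TABLE pictures (id INT, listing_id INT, picture TEXT);
INSERT INTO listing VALUES (1, 'House A'), (2, 'House B');
INSERT INTO pictures VALUES (10, 1, 'a.jpg');  -- listing 2 has no picture
""")
rows = con.execute("""
SELECT l.title, COALESCE(p.picture, 'default.jpg') AS picture
FROM listing l
LEFT OUTER JOIN pictures p ON l.id = p.listing_id
ORDER BY l.id
""").fetchall()
print(rows)   # [('House A', 'a.jpg'), ('House B', 'default.jpg')]
```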
qid & accept id: (3891758, 3892432) query: How to update one table from another one without specifying column names? soup:

soup wrap:

Not sure if you'll be able to accomplish this without using dynamic sql to build out the update statement in a variable.

This statement will return a list of columns based on the table name you put in:

select name from syscolumns
where [id] = (select [id] from sysobjects where name = 'tablename')

Not sure if I can avoid a loop here....you'll need to load the results from above into a cursor and then build a query from it. Pseudocode:

set @query = 'update [1607348182] set '
load cursor --(we will use @name to hold the column name)
while stillrecordsincursor
set @query = @query + @name + ' = tmp_[1607348182]. ' +@name + ','
load next value from cursor
loop!

When the query is done being built in the loop, use exec sp_executesql @query.
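The concatenation loop can be sketched in any host language; joining the pieces also sidesteps the trailing-comma problem the cursor loop above would leave behind. Table and column names here are hypothetical:

```python
# Build the dynamic UPDATE from a list of column names (as pulled from
# syscolumns). join() places commas between assignments, never after the last.
def build_update(table, source, columns):
    sets = ", ".join(f"[{c}] = {source}.[{c}]" for c in columns)
    return f"update [{table}] set {sets}"

print(build_update("target_tbl", "tmp_target_tbl", ["col1", "col2"]))
# update [target_tbl] set [col1] = tmp_target_tbl.[col1], [col2] = tmp_target_tbl.[col2]
```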

Just a little warning...building dynamic sql in a loop like this can get a bit confusing. For troubleshooting, put a select @query in the loop and watch @query get built.

edit: Not sure if you'll be able to do all 1000 columns in an update at once...there are logical limits (varchar(8000)?) on the size that @query can grow to. You may have to divide the code so it handles 50 columns at a time. Put the columns from the syscolumns select statement into a temp table with an id and build your dynamic sql so it updates 20 columns (or 50?) at a time.

Another alternative would be to use Excel to mass build this. Do the column select and copy the results into column A of a spreadsheet. Put '= in column B, tmp.[12331312] in column C, copy column A into column D, and a comma into column E. Copy the entire spreadsheet into a notepad, and you should have the columns of the update statement built out for you. Not a bad solution if this is a one-shot event; not sure if I'd rely on it as an ongoing solution.

qid & accept id: (3895652, 3895665) query: How to Truncate the Decimal Places without Rounding Up? soup:

soup wrap:

Using the ROUND function you can try this

select round(4.584406, 1, 1)

the output will be

4.5

the key is the third parameter

ROUND ( numeric_expression , length [ ,function ] )

function

Is the type of operation to perform. function must be tinyint,

smallint, or int. When function is omitted or has a value of 0 (default), numeric_expression is rounded. When a value other than 0 is specified, numeric_expression is truncated.
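For comparison, the same truncate-without-rounding behaviour in Python (Python's built-in round() rounds, so the sketch scales and truncates instead):

```python
import math

def truncate(value, places):
    """Drop digits past `places` decimals without rounding,
    like T-SQL ROUND(value, places, 1). Truncates toward zero."""
    factor = 10 ** places
    return math.trunc(value * factor) / factor

print(truncate(4.584406, 1))   # 4.5
print(round(4.584406, 1))      # 4.6 -- ordinary rounding, for contrast
```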

qid & accept id: (3900330, 3900450) query: MySQL get only rows with a unique value for a certain field soup:
soup wrap:
select min(id) from 
(
  select id, senderID pID from `table` where receiverID = '1'
  union
  select id, receiverID pID from `table` where senderID = '1'
) as fred
group by pID;

For your data set, this gives:

+---------+
| min(id) |
+---------+
|       0 |
|       1 |
+---------+
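An SQLite sketch of the query with illustrative message rows (the placeholder table is renamed `messages` here, since `table` is a reserved word):

```python
import sqlite3

# One row per conversation partner of user 1, keeping the earliest message id.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE messages (id INT, senderID INT, receiverID INT);
INSERT INTO messages VALUES (0, 1, 2), (1, 3, 1), (2, 1, 2), (3, 2, 1);
""")
rows = con.execute("""
SELECT MIN(id) FROM (
  SELECT id, senderID AS pID FROM messages WHERE receiverID = 1
  UNION
  SELECT id, receiverID AS pID FROM messages WHERE senderID = 1
) AS fred
GROUP BY pID ORDER BY 1
""").fetchall()
print([r[0] for r in rows])   # [0, 1]
```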
qid & accept id: (3925043, 3925608) query: Most optimized way to get column totals in SQL Server 2005+ soup:

soup wrap:

Any reason this isn't done as

select prg.prefix_id, count(1) from tablename where... group by prg.prefix_id     

It would leave you with a result set of the prefix_id and the count of rows for each prefix_id...might be preferable over a series of count(case) statements, and I think it should be quicker, but I can't confirm for sure.

I would use a subquery before resorting to @vars myself. Something like this:

   select c1,c2,c1+c2 as total from 
   (SELECT 
   count(case when prg.prefix_id = 1 then iss.id end) as c1, 
   count(case when prg.prefix_id = 2 then iss.id end) as c2 
   FROM dbo.TableName 
   WHERE ... ) a
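A quick SQLite sketch of the count(case)-in-a-subquery variant, with made-up rows (note the outer select totals c1 + c2):

```python
import sqlite3

# Conditional counts in one pass over the table, totalled in an outer select.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE issues (id INT, prefix_id INT);
INSERT INTO issues VALUES (1, 1), (2, 1), (3, 2);
""")
row = con.execute("""
SELECT c1, c2, c1 + c2 AS total FROM
  (SELECT COUNT(CASE WHEN prefix_id = 1 THEN id END) AS c1,
          COUNT(CASE WHEN prefix_id = 2 THEN id END) AS c2
   FROM issues) a
""").fetchone()
print(row)   # (2, 1, 3)
```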
qid & accept id: (3932947, 3933001) query: SQL Server 2005: how to subtract 6 month soup:

soup wrap:

You can use DATEADD:

select DATEADD(month, -6, @d)

EDIT: if you need the number of days up to 6 months ago you can use DATEDIFF:

select DATEDIFF(day, @d, DATEADD(month, -6, @d))
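Outside SQL Server the same month arithmetic needs care with month-end clamping; a Python sketch using only the standard library (DATEADD-style day clamping is the assumption here):

```python
from datetime import date
import calendar

def add_months(d, months):
    """Shift a date by whole months, clamping the day of month like
    T-SQL DATEADD (e.g. 31 Aug minus 6 months -> 28 Feb)."""
    m = d.month - 1 + months
    year = d.year + m // 12
    month = m % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

d = date(2010, 8, 31)
print(add_months(d, -6))             # 2010-02-28
print((d - add_months(d, -6)).days)  # day count, like DATEDIFF(day, ...)
```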
qid & accept id: (3951413, 3951429) query: How can I find and replace in MySQL? soup:
soup wrap:
UPDATE mytable 
   SET server_path = REPLACE(server_path,'/home/','/new_home/');

Link to documentation.

Edit:
If you need to update multiple fields you can string them along—with commas in between—in that same UPDATE statement, e.g.:

UPDATE mytable 
   SET mycol1 = REPLACE(mycol1,'/home/','/new_home/'), 
       mycol2 = REPLACE(mycol2,'/home/','/new_home/');
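The same REPLACE-in-UPDATE pattern works in SQLite too; a small runnable sketch with made-up paths:

```python
import sqlite3

# SQLite's replace() behaves like MySQL's REPLACE() here: rows without the
# search string are left untouched.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE mytable (server_path TEXT);
INSERT INTO mytable VALUES ('/home/site/a'), ('/var/www/b');
""")
con.execute("UPDATE mytable SET server_path = replace(server_path, '/home/', '/new_home/')")
rows = [r[0] for r in con.execute("SELECT server_path FROM mytable ORDER BY server_path")]
print(rows)   # ['/new_home/site/a', '/var/www/b']
```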
qid & accept id: (4017878, 4017990) query: php do something for every record in the database soup:

soup wrap:

Try to avoid the loop at all costs. Think set-based processing: handle the entire set of rows within one SQL command.

I'm not entirely sure what you are attempting to do, as your question is a little vague. However, here are two possible ways to handle what you are trying to do using set-based thinking.

You can do a JOIN in an UPDATE, essentially selecting from the parent table and UPDATEing the child table for all rows in a single UPDATE command.

UPDATE c
    SET Col1=p.Col1
    FROM ParentTable           p
        INNER JOIN ChildTable  c On p.ParentID=c.ParentID
    WHERE ...
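Note that the UPDATE ... FROM syntax above is SQL Server's. Since the question involves PHP (commonly paired with MySQL), the equivalent join-update in MySQL looks like this, a sketch using the same hypothetical tables:

```sql
UPDATE ChildTable c
    INNER JOIN ParentTable p ON p.ParentID = c.ParentID
    SET c.Col1 = p.Col1
WHERE ...
```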

You can also INSERT based on a SELECT, creating one row for each row returned by the SELECT, like:

INSERT INTO ChildTable
        (Col1, Col2, Col3, Col4)
    SELECT
        p.ColA, p.ColB, 'constant value', p.ColC-p.ColD
        FROM ParentTable p
        WHERE... 
qid & accept id: (4038960, 4038974) query: Basic MySQL Table Join? soup:
soup wrap:
SELECT `name`, `key`, ot.name AS OFFICE_NAME, `manager`, `id` 
  FROM `ASSOCIATION_TABLE` at
       LEFT OUTER JOIN OFFICE_TABLE ot
       ON ot.id = at.office
 WHERE `at`.`customer` = 4;

That's an outer join to OFFICE_TABLE. Your resultset will include any records in the ASSOCIATION_TABLE that do not have records in OFFICE_TABLE.

If you only want to return results with records in OFFICE_TABLE you will want an inner join, e.g.:

SELECT `name`, `key`, ot.name AS OFFICE_NAME, `manager`, `id` 
  FROM `ASSOCIATION_TABLE` at
       INNER JOIN OFFICE_TABLE ot
       ON ot.id = at.office
 WHERE `at`.`customer` = 4;
qid & accept id: (4062845, 4063011) query: How can I save semantic information in a MySQL table? soup:

soup wrap:

You're working on a hard and interesting problem! You may get some interesting ideas from looking at the Dublin Core Metadata Initiative.

http://dublincore.org/metadata-basics/

To make it simple, think of your metadata items as all fitting in one table.

e.g.

Ballmer employed-by Microsoft
Ballmer is-a Person
Microsoft is-a Organization
Microsoft run-by Ballmer
SoftImage acquired-by Microsoft
SoftImage is-a Organization
Joel Spolsky is-a Person
Joel Spolsky formerly-employed-by Microsoft
Spolsky, Joel dreamed-up StackOverflow
StackOverflow is-a Website
Socrates is-a Person
Socrates died-on (some date)

The trick here is that some, but not all, of your first and third column values need to be BOTH arbitrary text AND serve as indexes into the first and third columns. Then, if you're trying to figure out what your database has on Spolsky, you can full-text search your first and third columns for his name. You'll get out a bunch of triplets. The values you find will tell you a lot. If you want to know more, you can search again.

To pull this off you'll probably need to have five columns, as follows:

Full text subject  (whatever your user puts in)
Canonical subject (what your user puts in, massaged into a standard form)
Relation (is-a etc)
Full text object
Canonical object
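One possible concrete schema for those five columns, with illustrative table and column names (the FULLTEXT index is needed for the MATCH ... AGAINST query below, and was MyISAM-only in MySQL before 5.6):

```sql
CREATE TABLE relationships (
    subject           VARCHAR(255) NOT NULL,  -- full text subject, as entered
    canonical_subject VARCHAR(255) NOT NULL,  -- massaged into a standard form
    relation          VARCHAR(100) NOT NULL,  -- is-a, employed-by, ...
    object            VARCHAR(255) NOT NULL,  -- full text object, as entered
    canonical_object  VARCHAR(255) NOT NULL,
    FULLTEXT KEY ft_subj_obj (subject, object),
    KEY idx_canon (canonical_subject, canonical_object)
) ENGINE=MyISAM;
```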

The point of the canonical forms of your subject and object is to allow queries like this to work, even if your user puts in "Joel Spolsky" and "Spolsky, Joel" in two different places even if they mean the same person.

SELECT * 
  FROM relationships a
  JOIN relationships b ON a.canonical_object = b.canonical_subject
 WHERE MATCH (subject,object) AGAINST ('Spolsky')
qid & accept id: (4062865, 4062914) query: Adding a unique row count to a SQL 2008 "for xml path" statement? soup:

soup wrap:

You could alias @@rowcount to '@id', like:

declare @t table (name varchar(25))

insert @t (name) values ('jddjdjd')

select  @@rowcount as '@id'
,       name
from    @t
for xml path('row'), root('rows')

This prints:

<rows>
    <row id="1">
        <name>jddjdjd</name>
    </row>
</rows>

However, I'm not sure it's clearly defined what @@rowcount means at the point where it gets turned into an attribute.
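If that ambiguity is a concern, ROW_NUMBER() gives a well-defined per-row id instead; a sketch (not part of the original answer, reusing the @t table from above):

```sql
select  row_number() over (order by name) as '@id'
,       name
from    @t
for xml path('row'), root('rows')
```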

qid & accept id: (4212229, 4212279) query: Deleting dynamically managed tables in MySQL soup:

soup wrap:

You can run this query to generate all the DROP statements you need to run:

select concat( 'drop table ', a.table_name, ';' )
from information_schema.tables a 
where a.table_name like 'dynamic_%';

You can write the result to a file by appending:

INTO OUTFILE '/tmp/delete.sql';

Update, following Alexandre's comment:

SET @v = ( select concat( 'drop table ', group_concat(a.table_name))
    from information_schema.tables a 
    where a.table_name like 'dynamic_%'
    AND a.table_schema = DATABASE()
);
PREPARE s FROM @v;
EXECUTE s;
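One caveat worth noting: GROUP_CONCAT output is truncated at group_concat_max_len (1024 bytes by default), so with many tables you may need to raise the limit before building @v:

```sql
-- Raise the GROUP_CONCAT limit for this session
SET SESSION group_concat_max_len = 1024 * 1024;
```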
qid & accept id: (4225984, 4226581) query: "Pivoting" non-aggregate data in SQL Server soup:

soup wrap:

To get the basic numbered-role data, we might start with

SELECT
    org_nbr
    , r1.assoc_id   role1_ID
    , r1.last_name  role1_name
    , r2.assoc_id   role2_ID
    , r2.last_name  role2_name
    , r3.assoc_id   role3_ID
    , r3.last_name  role3_name
    , r4.assoc_id   role4_ID
    , r4.last_name  role4_name
    , r5.assoc_id   role5_ID
    , r5.last_name  role5_name
    , r6.assoc_id   role6_ID
    , r6.last_name  role6_name
FROM
    ASSOC_ROLE ar
    LEFT JOIN ASSOCIATE r1 ON ar.role_id = 1 AND ar.assoc_id = r1.assoc_id
    LEFT JOIN ASSOCIATE r2 ON ar.role_id = 2 AND ar.assoc_id = r2.assoc_id
    LEFT JOIN ASSOCIATE r3 ON ar.role_id = 3 AND ar.assoc_id = r3.assoc_id
    LEFT JOIN ASSOCIATE r4 ON ar.role_id = 4 AND ar.assoc_id = r4.assoc_id
    LEFT JOIN ASSOCIATE r5 ON ar.role_id = 5 AND ar.assoc_id = r5.assoc_id
    LEFT JOIN ASSOCIATE r6 ON ar.role_id = 6 AND ar.assoc_id = r6.assoc_id

BUT this will give us, for each org_nbr, a separate row for each role_id that has data! Which is not what we want - so we need to GROUP BY org_nbr. But then we need to either GROUP BY or aggregate over every column in the SELECT list! The trick then is to come up with an aggregate function that will placate SQL Server and give us the results we want. In this case, MIN will do the job:

SELECT
    org_nbr
    , MIN(r1.assoc_id)   role1_ID
    , MIN(r1.last_name)  role1_name
    , MIN(r2.assoc_id)   role2_ID
    , MIN(r2.last_name)  role2_name
    , MIN(r3.assoc_id)   role3_ID
    , MIN(r3.last_name)  role3_name
    , MIN(r4.assoc_id)   role4_ID
    , MIN(r4.last_name)  role4_name
    , MIN(r5.assoc_id)   role5_ID
    , MIN(r5.last_name)  role5_name
    , MIN(r6.assoc_id)   role6_ID
    , MIN(r6.last_name)  role6_name
FROM
    ASSOC_ROLE ar
    LEFT JOIN ASSOCIATE r1 ON ar.role_id = 1 AND ar.assoc_id = r1.assoc_id
    LEFT JOIN ASSOCIATE r2 ON ar.role_id = 2 AND ar.assoc_id = r2.assoc_id
    LEFT JOIN ASSOCIATE r3 ON ar.role_id = 3 AND ar.assoc_id = r3.assoc_id
    LEFT JOIN ASSOCIATE r4 ON ar.role_id = 4 AND ar.assoc_id = r4.assoc_id
    LEFT JOIN ASSOCIATE r5 ON ar.role_id = 5 AND ar.assoc_id = r5.assoc_id
    LEFT JOIN ASSOCIATE r6 ON ar.role_id = 6 AND ar.assoc_id = r6.assoc_id
GROUP BY
    org_nbr

Output:

org_nbr    role1_ID    role1_name role2_ID    role2_name role3_ID    role3_name role4_ID    role4_name role5_ID    role5_name role6_ID    role6_name
---------- ----------- ---------- ----------- ---------- ----------- ---------- ----------- ---------- ----------- ---------- ----------- ----------
1AA        1447        Cooper     NULL        NULL       1448        Collins    1448        Collins    1448        Collins    1449        Lynch
Warning: Null value is eliminated by an aggregate or other SET operation.

Of course this will fall short should the maximum role_id increase...

qid & accept id: (4226144, 4226200) query: Delete row when a table has an FK relationship soup:
soup wrap:
delete 
  from projects 
 where documentsFK = (
                      select documentFK 
                        from documents 
                       where documentsFK > 125
                     );

delete 
  from documents 
 where documentsFK > 125;

EDIT

delete 
  from projects 
 where documentsFK in (
                       select documentFK 
                         from documents 
                        where documentsFK > 125
                      );

delete 
  from documents 
 where documentsFK > 125;
qid & accept id: (4257442, 4257582) query: SQL Server How to persist and use a time across different time zones soup:

soup wrap:

In SQL Server 2008, use the DATETIMEOFFSET data type, which is a DATETIME with a time zone offset included.

SELECT CAST('2010-11-23 16:35:29+09:00' AS datetimeoffset) 

would be Nov 23, 2010, 4:35pm in a +9 hour (from GMT) timezone.

SQL Server 2008 also contains functions and SQL commands to convert DATETIMEOFFSET values from one timezone to another:

SELECT 
SWITCHOFFSET(CAST('2010-11-23 16:35:29+09:00' AS datetimeoffset), '+01:00')

would result in:

2010-11-23 08:35:29.0000000 +01:00

Same time, different timezone (+1 hour from GMT)
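SQL Server 2008 also provides SYSDATETIMEOFFSET() and TODATETIMEOFFSET() for working with this type; a quick sketch (the output values depend on the server's clock and time zone):

```sql
-- Current date/time including the server's UTC offset:
SELECT SYSDATETIMEOFFSET();

-- Attach an offset (+09:00 here) to a plain DATETIME without shifting the time:
SELECT TODATETIMEOFFSET(GETDATE(), '+09:00');
```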

qid & accept id: (4283031, 4283064) query: how to get last date form DB table mysql soup:

soup wrap:

A. This answers 'where date is the closest date from now...':

SELECT *
FROM `categoriesSupports`
WHERE `date` IN (
    SELECT `date`
    FROM `categoriesSupports`
    ORDER BY `date` DESC
    LIMIT 1
)

Notes:

  1. You can set LIMIT n to select entries for more dates.
  2. If you only want for the last date you can replace IN with = because the sub-select will return only one value.
  3. If your table includes future dates replace ORDER BY date DESC with ORDER BY ABS(NOW() - date) ASC.

A solution with JOINS. Will work only if you have past dates.

SELECT a.*
FROM `categoriesSupports` AS a
LEFT JOIN `categoriesSupports` AS b
    ON b.date > a.date
WHERE b.id IS NULL

Added just for reference.


B. This answers 'where date is in the last 3 days (including today)':

SELECT *
FROM `categoriesSupports`
WHERE DATEDIFF(NOW(), `date`) < 3

Replace 3 with any number if you want more or less days.


C. Same as A., but per support id

SELECT a.*
FROM `categoriesSupports` AS a
LEFT JOIN `categoriesSupports` AS b
    ON b.support_id = a.support_id AND b.date > a.date
WHERE b.id IS NULL

This answers the latest version of the question.

qid & accept id: (4301603, 4301887) query: Month name in sql server 2008 soup:
soup wrap:
SELECT DATENAME(month, <date_column>) AS "Month Name" FROM <table_name>

Ex:

SELECT DATENAME(month, JoinDate) AS "Month Name" FROM EMPLOYEE

This would return the month name corresponding to the date value in the JoinDate field of the EMPLOYEE table.

qid & accept id: (4352912, 4353096) query: Select distinct not-null rows SQL server 2005 soup:

soup wrap:

This works; I don't know if it can be made any simpler:

SELECT ID1, ID2, ID3, ID4, ID5
FROM IDS OUTT
WHERE NOT EXISTS (SELECT 1
                FROM IDS INN
                WHERE OUTT.ID != INN.ID AND
                      (ISNULL(OUTT.ID1, INN.ID1) = INN.ID1 OR (INN.ID1 IS NULL AND OUTT.ID1 IS NULL)) AND
                      (ISNULL(OUTT.ID2, INN.ID2) = INN.ID2 OR (INN.ID2 IS NULL AND OUTT.ID2 IS NULL)) AND
                      (ISNULL(OUTT.ID3, INN.ID3) = INN.ID3 OR (INN.ID3 IS NULL AND OUTT.ID3 IS NULL)) AND
                      (ISNULL(OUTT.ID4, INN.ID4) = INN.ID4 OR (INN.ID4 IS NULL AND OUTT.ID4 IS NULL)) AND
                      (ISNULL(OUTT.ID5, INN.ID5) = INN.ID5 OR (INN.ID5 IS NULL AND OUTT.ID5 IS NULL)))

EDIT: Found a sweeter alternative, if your IDs never have negative numbers:

SELECT ID1, ID2, ID3, ID4, ID5
FROM IDS OUTT
WHERE NOT EXISTS (SELECT 1
                FROM IDS INN
                WHERE OUTT.ID != INN.ID AND
                      coalesce(OUTT.ID1, INN.ID1,-1) = isnull(INN.ID1,-1) AND
                      coalesce(OUTT.ID2, INN.ID2,-1) = isnull(INN.ID2,-1) AND
                      coalesce(OUTT.ID3, INN.ID3,-1) = isnull(INN.ID3,-1) AND
                      coalesce(OUTT.ID4, INN.ID4,-1) = isnull(INN.ID4,-1) AND
                      coalesce(OUTT.ID5, INN.ID5,-1) = isnull(INN.ID5,-1))  

EDIT2: There is one case where it won't work: when two rows (with different IDs) have exactly the same values. I am assuming that is not the case here. If it is, first create a view with a SELECT DISTINCT on the base table, and then apply this query.

qid & accept id: (4400347, 4400444) query: How to get a of count of items for multiple tables soup:

soup wrap:

To get counts by ip and by day, the easiest way is to flatten the query:

SELECT 'day1' AS day, srcIP, count(*) AS count FROM Day1 GROUP BY srcIP
UNION
SELECT 'day2' AS day, srcIP, count(*) AS count FROM Day2 GROUP BY srcIP
UNION
SELECT 'day3' AS day, srcIP, count(*) AS count FROM Day3 GROUP BY srcIP

and then transpose it in your app to get the table format you want.

Alternatively

You can also do it by joining on IP:

SELECT srcIP, d1.count, d2.count, d3.count
FROM (SELECT srcIP, count(*) AS count FROM Day1 GROUP BY srcIP) d1
LEFT JOIN (SELECT srcIP, count(*) AS count FROM Day2 GROUP BY srcIP) d2 USING (srcIP)
LEFT JOIN (SELECT srcIP, count(*) AS count FROM Day3 GROUP BY srcIP) d3 USING (srcIP)

But here you will be missing IPs that are not in Day1, unless you first SELECT DISTINCT srcIP from a UNION of all days, which is pretty expensive. Basically, this table structure doesn't lend itself easily to this kind of aggregation.
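A sketch of that UNION-driven variant, using the same hypothetical Day1/Day2/Day3 tables as above (UNION without ALL already deduplicates the IP list):

```sql
SELECT ips.srcIP, d1.count AS day1, d2.count AS day2, d3.count AS day3
FROM (
    SELECT srcIP FROM Day1
    UNION
    SELECT srcIP FROM Day2
    UNION
    SELECT srcIP FROM Day3
) ips
LEFT JOIN (SELECT srcIP, count(*) AS count FROM Day1 GROUP BY srcIP) d1 USING (srcIP)
LEFT JOIN (SELECT srcIP, count(*) AS count FROM Day2 GROUP BY srcIP) d2 USING (srcIP)
LEFT JOIN (SELECT srcIP, count(*) AS count FROM Day3 GROUP BY srcIP) d3 USING (srcIP)
```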

qid & accept id: (4429428, 4429746) query: Passing the tablename to the cursor soup:

soup wrap:

To expand on JackPDouglas' answer, you cannot use a parameter as the table name in a static cursor. You must use dynamic SQL opened into a REF CURSOR:

http://download.oracle.com/docs/cd/B10500_01/appdev.920/a96590/adg09dyn.htm#24492

CREATE OR REPLACE PROCEDURE dynaQuery(
       TAB IN VARCHAR2, 
       sid in number ,
       cur OUT NOCOPY sys_refcursor) IS
 query_str VARCHAR2(200);
BEGIN
    query_str := 'SELECT USERNAME FROM ' || tab
      || ' WHERE sid= :id';
dbms_output.put_line(query_str);
    OPEN cur FOR query_str USING sid;
END ;
/

Commence Example

create table test1(sid number, username varchar2(50));
insert into test1(sid, username) values(123,'abc');
insert into test1(sid, username) values(123,'ddd');
insert into test1(sid, username) values(222,'abc');
commit;
/



 declare 
  cur  sys_refcursor ;
  sid number ;
  uName varchar2(50) ;
  begin
  sid := 123; 
  dynaQuery('test1',sid, cur);
   LOOP
     FETCH cur INTO uName;
     DBMS_OUTPUT.put_line(uName);
     EXIT WHEN cur%NOTFOUND;
     -- process row here
   END LOOP;
CLOSE CUR;


  end ;

Output:

SELECT USERNAME FROM test1 WHERE sid= :id
abc
ddd
abc
ddd
ddd

EDIT: Added CLOSE CUR, as rightly suggested by @JackPDouglas.

qid & accept id: (4434581, 4434608) query: SQL Query to check if student1 has a course with student 2 soup:

soup wrap:

Try a self-join:

SELECT T1.id_group
FROM jos_gj_users T1
JOIN jos_gj_users T2
ON T1.id_group = T2.id_group
WHERE T1.id_user = 20
AND T2.id_user = 21

To get just a "true or false" result, you can check from the client whether at least one row exists in the result set, rather than fetching all the results.

Alternatively you can do it in SQL by wrapping the above query in another SELECT that uses EXISTS:

SELECT CASE WHEN EXISTS
(
    SELECT T1.id_group
    FROM jos_gj_users T1
    JOIN jos_gj_users T2
    ON T1.id_group = T2.id_group
    WHERE T1.id_user = 20
    AND T2.id_user = 21
) THEN 1 ELSE 0 END AS result

This query returns either 0 (false) or 1 (true).

qid & accept id: (4441599, 4441664) query: How do I join an unknown number of rows to another row? soup:

soup wrap:

You need to use a Dynamic PIVOT clause in order to do this.

EDIT:

Ok so I've done some playing around and based on the following sample data:

Create Table TableA
(
IDCol int,
SomeValue varchar(50)
)
Create Table TableB
(
IDCol int,
KEYCol int,
Value varchar(50)
)

Insert into TableA
Values (1, '123223')
Insert Into TableA
Values (2,'1232ff')
Insert into TableA
Values (3, '222222')

Insert Into TableB
Values( 23, 1, 435)
Insert Into TableB
Values( 24, 1, 436)

Insert Into TableB
Values( 25, 3, 45)
Insert Into TableB
Values( 26, 3, 46)

Insert Into TableB
Values( 27, 3, 435)
Insert Into TableB
Values( 28, 3, 437)

You can execute the following Dynamic SQL.

declare @sql varchar(max)
declare @pivot_list varchar(max)
declare @pivot_select varchar(max)

Select 
        @pivot_list = Coalesce(@Pivot_List + ', ','') + '[' + Value +']',
        @Pivot_select = Coalesce(@pivot_Select, ', ','') +'IsNull([' + Value +'],'''') as [' + Value + '],'
From 
(
Select distinct Value From dbo.TableB 
)PivotCodes

Set @Sql = '
;With p as (

Select a.IdCol,
        a.SomeValue,
        b.Value
From dbo.TableA a
Left Join dbo.TableB b on a.IdCol = b.KeyCol
)
Select IdCol, SomeValue ' + Left(@pivot_select, Len(@Pivot_Select)-1) + '
From p
Pivot ( Max(Value) for Value in (' + @pivot_list + '
        )
    )as pvt
'

exec (@sql)
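The build-a-column-list-then-execute idea behind the dynamic PIVOT is not T-SQL-specific. As a sanity check, here is a sketch of the same technique in Python over sqlite3 (which has no PIVOT, so one MAX(CASE …) column per distinct value stands in for the pivot); the table and column names follow the sample above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableA (IDCol INTEGER, SomeValue TEXT);
CREATE TABLE TableB (IDCol INTEGER, KEYCol INTEGER, Value TEXT);
INSERT INTO TableA VALUES (1,'123223'),(2,'1232ff'),(3,'222222');
INSERT INTO TableB VALUES (23,1,'435'),(24,1,'436'),(25,3,'45'),
                          (26,3,'46'),(27,3,'435'),(28,3,'437');
""")

# Step 1: collect the distinct values -- the equivalent of building @pivot_list.
values = [v for (v,) in conn.execute(
    "SELECT DISTINCT Value FROM TableB ORDER BY Value")]

# Step 2: build one output column per value, like the IsNull(...) select list.
cols = ", ".join(
    "MAX(CASE WHEN b.Value = '{0}' THEN b.Value ELSE '' END) AS [{0}]".format(v)
    for v in values)

# Step 3: run the generated SQL, as exec(@sql) does above.
sql = ("SELECT a.IDCol, a.SomeValue, " + cols +
       " FROM TableA a LEFT JOIN TableB b ON a.IDCol = b.KEYCol"
       " GROUP BY a.IDCol, a.SomeValue ORDER BY a.IDCol")
rows = conn.execute(sql).fetchall()
```

The same maintenance caveat applies: the generated column list changes whenever TableB gains a new value.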

This gives you the following output:

alt text

Although this works at the moment, it would be a nightmare to maintain. I'd recommend trying to achieve these results somewhere else, i.e. not in SQL!

Good luck!

qid & accept id: (4459902, 4460148) query: is it possible to have alphanumeric sequence generator in sql soup:

soup wrap:

You could create a function like this:

create function to_base_36 (n integer) return varchar2
is
  q integer;
  r varchar2(100);
begin
  q := n;
  while q >= 36 loop
     r := chr(mod(q,36)+case when mod(q,36) < 10 then 48 else 55 end) || r;
     q := floor(q/36);
  end loop;
  r := chr(mod(q,36)+case when mod(q,36) < 10 then 48 else 55 end) || r;
  return lpad(r,4,'0');
end;

and then use it like this:

select rownum, to_base_36(rownum)
from dual
connect by level < 36*36*36*36;

Or, without creating a function:

with digits as
( select n, chr(mod(n,36)+case when mod(n,36) < 10 then 48 else 55 end) d
  from (Select rownum-1 as n from dual connect by level < 37)
)
select d1.n*36*36*36 + d2.n*36*36 + d3.n*36 + d4.n, d1.d||d2.d||d3.d||d4.d
from digits d1, digits d2, digits d3, digits d4
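The PL/SQL function is easy to sanity-check against a plain Python transcription of the same algorithm (digit values below 10 map to chr 48+, i.e. '0'-'9', the rest to chr 55+, i.e. 'A'-'Z', left-padded to four places):

```python
def to_base_36(n: int) -> str:
    """Mirror of the PL/SQL to_base_36: base-36 digits 0-9 then A-Z, padded to 4."""
    r = ""
    q = n
    while q >= 36:
        d = q % 36
        r = chr(d + (48 if d < 10 else 55)) + r
        q //= 36
    d = q % 36
    r = chr(d + (48 if d < 10 else 55)) + r
    return r.rjust(4, "0")
```

Four base-36 digits cover 36^4 = 1,679,616 distinct values, which is why the `connect by level < 36*36*36*36` generator above never overflows the padding.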
qid & accept id: (4521020, 4521199) query: Calculate open timeslots given availability and existing appointments - by day soup:

soup wrap:

You need to discretize your time. Choose a time interval to use as your atom. Based on your example, that should probably be a half hour.

Now

Create table Availability (person_id int, interval_id int);
Create table Appointment (person_id int, interval_id int, appointment_desc text);

I'm leaving out the primary keys, and there should be foreign keys to lookup tables for Person and Interval.

There will be an Interval table for looking up what each interval_id stands for.

Create table Interval(interval_id int primary key, interval_start datetime, interval_end datetime)

Populate the Interval table with every interval you're going to have in your calendar. Populating it might be a chore, but you can create the actual values in Excel, then paste them into your Interval table.

Now you can find free intervals as

Select person_id, interval_id from Availability av
left join Appointment ap
on av.person_id = ap.person_id and av.interval_id = ap.interval_id
where ap.interval_id is null

MSSQL can do this kind of outer join in no time (provided you set up the keys), and you can include the list of free intervals in the pages you send, with javascript to display them when and as desired.
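A minimal sketch of that outer-join query, here run through Python's sqlite3 with invented rows (person 1 is available for intervals 1-3 and has interval 2 booked):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Availability (person_id INT, interval_id INT);
CREATE TABLE Appointment  (person_id INT, interval_id INT, appointment_desc TEXT);
INSERT INTO Availability VALUES (1,1),(1,2),(1,3),(2,1);
INSERT INTO Appointment  VALUES (1,2,'dentist');
""")

# Free slots = available intervals with no matching appointment row.
free = conn.execute("""
    SELECT av.person_id, av.interval_id
    FROM Availability av
    LEFT JOIN Appointment ap
      ON av.person_id = ap.person_id AND av.interval_id = ap.interval_id
    WHERE ap.interval_id IS NULL
    ORDER BY av.person_id, av.interval_id
""").fetchall()
```

Person 1's booked interval 2 drops out, leaving only the truly open slots.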

qid & accept id: (4589157, 4589195) query: If and only if condition SQL -- SQL server 2008 soup:

soup wrap:

For a "complete" pull:

SELECT p.profileID, p.firstName, p.lastName, sc.cprAdultExp, sc.....
FROM pro_Profile AS p
   LEFT OUTER JOIN mod_StudentCertifications AS sc ON sc.profileID = p.profileID
WHERE p.profileID NOT IN
    (
       SELECT profileID
       FROM mod_userStatus
    )
;

For a single "profile" pull:

SELECT p.profileID, p.firstName, p.lastName, sc.cprAdultExp, sc.....
FROM pro_Profile AS p
   LEFT OUTER JOIN mod_StudentCertifications AS sc ON sc.profileID = p.profileID
WHERE p.profileID = ?
    AND p.profileID NOT IN      
    (
       SELECT profileID
       FROM mod_userStatus
       WHERE profileID = ?
    )
;

EDIT: I looked at the execution plan of a LEFT OUTER JOIN to mod_userStatus with a check of its primary key for NULL vs. a NOT IN statement in a similar setup. The NOT IN statement is indeed less costly.

The LEFT OUTER JOIN performs a filter & hash match (Cost: 2.984): alt text

While the NOT IN performs a merge join (Cost: 1.508): alt text
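One caveat worth keeping in mind: NOT IN returns no rows at all if the subquery can yield a NULL, so make sure mod_userStatus.profileID is declared NOT NULL, or filter NULLs in the subquery. A small sqlite3 sketch of the anti-filter with invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pro_Profile (profileID INT, firstName TEXT);
CREATE TABLE mod_userStatus (profileID INT);
INSERT INTO pro_Profile VALUES (1,'Ann'),(2,'Bob'),(3,'Cal');
INSERT INTO mod_userStatus VALUES (2);
""")

# Profiles with no status row; NULL-proofed with an IS NOT NULL filter.
rows = conn.execute("""
    SELECT p.profileID, p.firstName
    FROM pro_Profile p
    WHERE p.profileID NOT IN
        (SELECT profileID FROM mod_userStatus WHERE profileID IS NOT NULL)
    ORDER BY p.profileID
""").fetchall()
```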

qid & accept id: (4598659, 4598746) query: sql stored procedure loop soup:

soup wrap:

This is not directly an answer, but the code cannot be posted in a readable fashion in a comment, so I think this should be okay here:

Don't loop in SPs, rather use a CTE to generate the numbers you need.

DECLARE @YearToGet int;
SET @YearToGet = 2005;

WITH Years AS (
    SELECT DATEPART(year, GETDATE()) [Year]
    UNION ALL
    SELECT [Year]-1 FROM Years WHERE [Year]>@YearToGet
)
SELECT * FROM Years -- join here with your query
OPTION (MAXRECURSION 0) -- this avoids hitting the recursion limit in the CTE

Edit: Try this

WITH  Years
          AS (
              SELECT DATEPART(year, GETDATE()) [Year]
              UNION ALL
              SELECT [Year]-1
                FROM Years
                WHERE [Year] > @YearToGet
             )
    SELECT DIVISION, DYYYY, SUM(APRICE) AS Sales, SUM(PARTY) AS PAX, SUM(NetAmount) AS NetSales, SUM(InsAmount) AS InsSales, SUM(CancelRevenue) AS CXSales, SUM(OtherAmount) AS OtherSales, SUM(CXVALUE) AS CXValue
      FROM dbo.B101BookingsDetails 
      JOIN Years yr ON DYYYY = yr.[Year]
      WHERE Booked <= CONVERT(int, DATEADD(year, DYYYY-YEAR(GETDATE()), DATEADD(day, DATEDIFF(day, 2, GETDATE()), 0)))
      GROUP BY DYYYY, DIVISION
      ORDER BY DIVISION, DYYYY
    OPTION (MAXRECURSION 0);
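The same counting CTE works in any engine with recursive CTEs. Here is a sqlite3 sketch with the current year hard-coded as 2010 (sqlite spells the recursion out with WITH RECURSIVE and has no MAXRECURSION hint to worry about):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH RECURSIVE years(y) AS (
        SELECT 2010                 -- stand-in for DATEPART(year, GETDATE())
        UNION ALL
        SELECT y - 1 FROM years WHERE y > 2005
    )
    SELECT y FROM years ORDER BY y
""").fetchall()
```

Each recursive step subtracts one until the @YearToGet floor is reached, producing one row per year with no procedural loop.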
qid & accept id: (4621932, 4623655) query: Oracle: How do I display DBMS_XMLDOM.DOMDocument for debugging? soup:
soup wrap:
DBMS_XMLDOM.WRITETOBUFFER  Writes the contents of the node to a buffer.
DBMS_XMLDOM.WRITETOCLOB    Writes the contents of the node to a CLOB.
DBMS_XMLDOM.WRITETOFILE    Writes the contents of the node to a file.

I have PL/SQL code that writes it to the file system using a DIRECTORY:

   dbms_xmldom.writeToFile(dbms_xmldom.newDOMDocument( xmldoc)
                                       ,'DATAPUMPDIR/myfile.xml') ;

I have created a function using dbms_xmldom.writetoclob

   create or replace function xml2clob (xmldoc XMLType) return CLOB is
     clobdoc CLOB := ' ';
   begin
     dbms_xmldom.writeToClob(dbms_xmldom.newDOMDocument( xmldoc)
                                       ,clobdoc) ;
     return clobdoc;
   end;
   /

Query:

SELECT xml2clob(Sys_Xmlagg(
         Xmlelement(Name "dummy"
                   ,dummy
                   ),Xmlformat('dual')))
   FROM dual;

Output:



  X

You could try using a function like this:

   create or replace function dom2clob (domdoc  DBMS_XMLDOM.DOMDocument) return CLOB is
     clobdoc CLOB := ' ';
   begin
     dbms_xmldom.writeToClob(domdoc,clobdoc) ;
     return clobdoc;
   end;
   /
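The serialize-the-DOM-to-text debugging pattern is much the same in most DOM APIs. For comparison, a Python xml.dom.minidom sketch that plays the role of writeToClob (the XML content here is just an invented sample):

```python
from xml.dom.minidom import parseString

# Parse a small document, then serialize the whole tree back to one string
# for printing -- the minidom analogue of dbms_xmldom.writeToClob.
doc = parseString("<ROWSET><dummy>X</dummy></ROWSET>")
text = doc.toxml()
```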
qid & accept id: (4761507, 4761860) query: Matching first char in string to digit or non-standard character soup:

soup wrap:

Create links representing every letter and number. Clicking these links will provide the users with the results from the database that begin with the selected character.

SELECT title FROM table
WHERE LEFT(title,1) = ?Char
ORDER BY title ASC;

Consider paginating these result pages into appropriate chunks. MySQL will let you do this with LIMIT.

This command will select the first 100 records from the desired character group:

SELECT title FROM table
WHERE LEFT(title,1) = ?Char
ORDER BY title ASC
LIMIT 0, 100;

This command will select the second 100 records from the desired character group:

SELECT title FROM table
WHERE LEFT(title,1) = ?Char
ORDER BY title ASC
LIMIT 100, 100;

Per your comments, if you want to combine characters 0-9 without using regex, you will need to combine several OR statements:

SELECT title FROM table
WHERE (
    LEFT(title,1) = '0'
    OR LEFT(title,1) = '1'
    ...
    )
ORDER BY title ASC;
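For the digit group specifically, a BETWEEN range is terser than chaining ten ORs and still avoids regex. A sqlite3 sketch with invented titles (LEFT(title,1) is spelled substr(title,1,1) there, and LIMIT/OFFSET replaces the two LIMIT arguments):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE titles (title TEXT);
INSERT INTO titles VALUES ('Alpha'),('amber'),('Beta'),('1999'),('2001');
""")

# One letter group, first page of up to 100 rows, case-insensitive.
a_rows = conn.execute("""
    SELECT title FROM titles
    WHERE UPPER(substr(title, 1, 1)) = ?
    ORDER BY title LIMIT 100 OFFSET 0
""", ("A",)).fetchall()

# The whole 0-9 group in one predicate instead of ten ORs.
digit_rows = conn.execute("""
    SELECT title FROM titles
    WHERE substr(title, 1, 1) BETWEEN '0' AND '9'
    ORDER BY title
""").fetchall()
```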
qid & accept id: (4773206, 4773215) query: how to update using nested query in SQL soup:

soup wrap:

Give this a try

Update t
Set t.yyyy = q.Name
From TableToUpdate t
Join AddressTable q on q.Address = t.Address

This assumes that the Address field (which you are joining on) has a one-to-one relationship with the Address field in the table you are updating.

This can also be written

Update TableToUpdate
Set yyyy = q.Name
From AddressTable q
WHERE q.Address = TableToUpdate.Address

This works because the table being updated is accessible in the FROM/WHERE clauses; the only restriction is that it cannot be aliased there.
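The UPDATE … FROM syntax above is T-SQL-specific. Where it isn't available, the portable spelling is a correlated subquery plus an EXISTS guard (so rows with no match keep their old value instead of being set to NULL). A sqlite3 sketch with invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableToUpdate (Address TEXT, yyyy TEXT);
CREATE TABLE AddressTable  (Address TEXT, Name TEXT);
INSERT INTO TableToUpdate VALUES ('1 Main St', NULL), ('2 Oak Ave', NULL);
INSERT INTO AddressTable  VALUES ('1 Main St', 'Ann');
""")

# ANSI-portable equivalent of the T-SQL UPDATE ... FROM join.
conn.execute("""
    UPDATE TableToUpdate
    SET yyyy = (SELECT q.Name FROM AddressTable q
                WHERE q.Address = TableToUpdate.Address)
    WHERE EXISTS (SELECT 1 FROM AddressTable q
                  WHERE q.Address = TableToUpdate.Address)
""")
rows = conn.execute(
    "SELECT yyyy FROM TableToUpdate ORDER BY Address").fetchall()
```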

qid & accept id: (4787104, 4787136) query: How to Select and Order By columns not in Groupy By SQL statement - Oracle soup:

soup wrap:

It does not make sense to include columns that are not part of the GROUP BY clause. Consider: if you have MIN(X) and MAX(Y) in the SELECT clause, which row should the other (non-grouped) columns come from?

If your Oracle version is recent enough, you can use SUM - OVER() to show the SUM (grouped) against every data row.

SELECT  
    IMPORTID,Site,Desk,Region,RefObligor,
    SUM(NOTIONAL) OVER(PARTITION BY IMPORTID, Region,RefObligor) AS SUM_NOTIONAL
From 
    Positions
Where
    ID = :importID
Order BY 
    IMPORTID,Region,Site,Desk,RefObligor

Alternatively, you need to make an aggregate out of the Site, Desk columns

SELECT  
    IMPORTID,Region,Min(Site) Site, Min(Desk) Desk,RefObligor,SUM(NOTIONAL) AS SUM_NOTIONAL
From 
    Positions
Where
    ID = :importID
GROUP BY 
    IMPORTID, Region,RefObligor
Order BY 
    IMPORTID, Region,Min(Site),Min(Desk),RefObligor
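A stripped-down sketch of the SUM(...) OVER(PARTITION BY ...) behaviour, run in sqlite3 (window functions need sqlite 3.25+, bundled with modern Pythons) over made-up positions: every detail row survives, each carrying its partition's total.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Positions (Region TEXT, Site TEXT, Notional INT);
INSERT INTO Positions VALUES
  ('EU','London',10),('EU','Paris',20),('US','NY',5);
""")

# The windowed SUM shows the per-Region total on every row,
# without collapsing rows the way GROUP BY would.
rows = conn.execute("""
    SELECT Region, Site, Notional,
           SUM(Notional) OVER (PARTITION BY Region) AS sum_notional
    FROM Positions ORDER BY Region, Site
""").fetchall()
```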
qid & accept id: (4821831, 4822494) query: sql server: generate primary key based on counter and another column value soup:

soup wrap:

Whilst I agree with the naysayers, the principle of "accepting that which cannot be changed" tends to lower the overall stress level, IMHO. Try the following approach.

Disadvantages

On the up side, though, this approach doesn't have any race conditions associated with it, and it isn't too egregious a hack to really and truly offend my sensibilities. So...

First, start with a key generation table. It will contain 1 row for each company, containing your company identifier and an integer counter that we'll be bumping up each time an insert is performed.

create table dbo.CustomerNumberGenerator
(
  company     varchar(8) not null ,
  curr_value  int        not null default(1) ,

  constraint CustomerNumberGenerator_PK primary key clustered ( company ) ,

)

Second, you'll need a stored procedure like this (in fact, you might want to integrate this logic into the stored procedure responsible for inserting the customer record. More on that in a bit). This stored procedure accepts a company identifier (e.g. 'MSFT') as its sole argument. This stored procedure does the following:

Here you go:

create procedure dbo.GetNewCustomerNumber

  @company         varchar(8)

as

  set nocount                 on
  set ansi_nulls              on
  set concat_null_yields_null on
  set xact_abort              on

  declare
    @customer_number varchar(32)

  --
  -- put the supplied key in canonical form
  --
  set @company = ltrim(rtrim(upper(@company)))

  --
  -- if the name isn't already defined in the table, define it.
  --
  insert dbo.CustomerNumberGenerator ( company )
  select id = @company
  where not exists ( select *
                     from dbo.CustomerNumberGenerator
                     where company = @company
                   )

  --
  -- now, an interlocked update to get the current value and increment the table
  --
  update CustomerNumberGenerator
  set @customer_number = company + right( '00000000' + convert(varchar,curr_value) , 8 ) ,
      curr_value       = curr_value + 1
  where company = @company

  --
  -- return the new unique value to the caller
  --
  select customer_number = @customer_number
  return 0

go

The reason you might want to integrate this into the stored procedure that inserts a row into the customer table is that it lets you glob it all together into a single transaction; without that, your customer numbers may/will get gaps when an insert fails and gets rolled back.
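The same shape translates directly to other engines. Here is a hypothetical Python/sqlite3 rendition, with the insert-if-missing, read, and increment wrapped in one transaction to mimic the interlocked update:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE CustomerNumberGenerator (
    company TEXT PRIMARY KEY, curr_value INT NOT NULL DEFAULT 1)""")

def get_new_customer_number(company: str) -> str:
    # Canonical form, as in the stored procedure.
    company = company.strip().upper()
    with conn:  # one transaction: seed row, read counter, bump counter
        conn.execute(
            "INSERT OR IGNORE INTO CustomerNumberGenerator (company) VALUES (?)",
            (company,))
        (value,) = conn.execute(
            "SELECT curr_value FROM CustomerNumberGenerator WHERE company = ?",
            (company,)).fetchone()
        conn.execute(
            "UPDATE CustomerNumberGenerator SET curr_value = curr_value + 1 "
            "WHERE company = ?", (company,))
    return "{}{:08d}".format(company, value)
```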

qid & accept id: (4823208, 4823298) query: How do I select unique pairs of rows from a table at random? soup:
soup wrap:
select a.id, b.id
from people1 a
inner join people1 b on a.id < b.id
where not exists (
    select *
    from pairs1 c
    where c.person_a_id = a.id
      and c.person_b_id = b.id)
order by a.id * rand()
limit 1;

Limit 1 returns just one pair if you are "drawing lots" one at a time. Otherwise, up the limit to however many pairs you need.

The above query assumes that you can get

1 - 2
2 - 7

and that the pairing 2 - 7 is valid since it doesn't exist, even if 2 is featured again. If you only want a person to feature in only one pair ever, then

select a.id, b.id
from people1 a
inner join people1 b on a.id < b.id
where not exists (
    select *
    from pairs1 c
    where c.person_a_id in (a.id, b.id))
  and not exists (
    select *
    from pairs1 c
    where c.person_b_id in (a.id, b.id))
order by a.id * rand()
limit 1;

If multiple pairs are to be generated in one single query, AND the destination table is still empty, you could use this single query. Take note that LIMIT 6 returns only 3 pairs.

select min(a) a, min(b) b
from
(
    select
      case when mod(@p,2) = 1 then id end a,
      case when mod(@p,2) = 0 then id end b,
      @p:=@p+1 grp
    from (
        select id
        from (select @p:=1) p, people1
        order by rand()
        limit 6
    ) x
) y
group by floor(grp/2)
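A quick way to convince yourself the anti-join guard works: with people {1, 2, 3} and pair (1, 2) already drawn, only (1, 3) and (2, 3) can come back. Sketched in sqlite3 (RAND() is spelled RANDOM() there, and NOT EXISTS replaces the left-anti subquery):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE people1 (id INT);
CREATE TABLE pairs1 (person_a_id INT, person_b_id INT);
INSERT INTO people1 VALUES (1),(2),(3);
INSERT INTO pairs1 VALUES (1,2);
""")

# a.id < b.id generates each unordered pair once; the NOT EXISTS
# filter skips pairs already drawn; ORDER BY RANDOM() picks one.
pair = conn.execute("""
    SELECT a.id, b.id
    FROM people1 a JOIN people1 b ON a.id < b.id
    WHERE NOT EXISTS (SELECT 1 FROM pairs1 c
                      WHERE c.person_a_id = a.id AND c.person_b_id = b.id)
    ORDER BY RANDOM() LIMIT 1
""").fetchone()
```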
qid & accept id: (4841038, 4841062) query: Force MySQL to use two indexes on a Join soup:

soup wrap:

See MySQL Docs for FORCE INDEX.

JOIN survey_customer_similarity AS scs 
FORCE INDEX (CONSUMER_ID_1,CONSUMER_ID_2)
ON
cr.CONSUMER_ID=scs.CONSUMER_ID_2 
AND cal.SENDER_CONSUMER_ID=scs.CONSUMER_ID_1 
OR cr.CONSUMER_ID=scs.CONSUMER_ID_1 
AND cal.SENDER_CONSUMER_ID=scs.CONSUMER_ID_2

As TheScrumMeister has pointed out below, whether two indexes can actually be used at once depends on your data.


Here's an example where you need to force the table to appear twice to control the query execution and intersection.

Use this to create a table with >100K records, with roughly 1K rows matching the filter i in (2,3) and 1K rows matching j in (2,3):

drop table if exists t1;
create table t1 (id int auto_increment primary key, i int, j int);
create index ix_t1_on_i on t1(i);
create index ix_t1_on_j on t1(j);
insert into t1 (i,j) values (2,2),(2,3),(4,5),(6,6),(2,6),(2,7),(3,2);
insert into t1 (i,j) select i*2, j*2+i from t1;
insert into t1 (i,j) select i*2, j*2+i from t1;
insert into t1 (i,j) select i*2, j*2+i from t1;
insert into t1 (i,j) select i*2, j*2+i from t1;
insert into t1 (i,j) select i*2, j*2+i from t1;
insert into t1 (i,j) select i*2, j*2+i from t1;
insert into t1 (i,j) select i*2, j*2+i from t1;
insert into t1 (i,j) select i*2, j*2+i from t1;
insert into t1 (i,j) select i*2, j*2+i from t1;
insert into t1 (i,j) select i*2, j*2+i from t1;
insert into t1 (i,j) select i*2, j*2+i from t1;
insert into t1 (i,j) select i*2, j*2+i from t1;
insert into t1 (i,j) select i, j from t1;
insert into t1 (i,j) select i, j from t1;
insert into t1 (i,j) select 2, j from t1 where not j in (2,3) limit 1000;
insert into t1 (i,j) select i, 3 from t1 where not i in (2,3) limit 1000;

When doing:

select t.* from t1 as t where t.i=2 and t.j=3 or t.i=3 and t.j=2

you get exactly 8 matches:

+-------+------+------+
| id    | i    | j    |
+-------+------+------+
|     7 |    3 |    2 |
| 28679 |    3 |    2 |
| 57351 |    3 |    2 |
| 86023 |    3 |    2 |
|     2 |    2 |    3 |
| 28674 |    2 |    3 |
| 57346 |    2 |    3 |
| 86018 |    2 |    3 |
+-------+------+------+

Use EXPLAIN on the query above to get:

id | select_type | table | type  | possible_keys         | key        | key_len | ref  | rows | Extra
1  | SIMPLE      | t     | range | ix_t1_on_i,ix_t1_on_j | ix_t1_on_j | 5       | NULL | 1012 | Using where

Even if we add FORCE INDEX with both indexes to the query, EXPLAIN returns the exact same thing.

To make it collect across two indexes, and then intersect them, use this:

select a.* from t1 as a force index(ix_t1_on_i)

join t1 as b force index(ix_t1_on_j) on a.id=b.id

where a.i=2 and b.j=3 or a.i=3 and b.j=2

Use that query with explain to get:

id | select_type | table | type  | possible_keys | key        | key_len | ref  | rows | Extra
1  | SIMPLE      | a     | range | ix_t1_on_i    | ix_t1_on_i | 5       | NULL | 1019 | Using where
1  | SIMPLE      | b     | range | ix_t1_on_j    | ix_t1_on_j | 5       | NULL | 1012 | Using where; Using index

This proves that the indexes are being used. But that may or may not be faster depending on many other factors.

qid & accept id: (4857837, 4857878) query: SQL query that can select n rows order by and then return m row soup:

soup wrap:

The literal interpretation would lead to

select top 1000 from tbl order by columnname

And the next step to

SELECT TOP 100 * FROM (select top 1000 * from tbl order by columnname) SQ

But that gives nothing different from a direct

select top 100 * from tbl order by columnname

Unless you are after 2 different orderings

SELECT TOP 100 *
FROM (
   select top 1000 * from tbl
   order by columnname) SQ
ORDER BY othercolumn

or switching between asc/desc

SELECT TOP 100 *
FROM (
   select top 1000 * from tbl
   order by columnname ASC) SQ
ORDER BY columnname DESC
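The nested-ordering pattern can be sanity-checked outside SQL Server. A minimal sketch in Python with SQLite, which uses LIMIT in place of TOP (table and column names are the hypothetical ones from above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (columnname INTEGER, othercolumn TEXT)")
conn.executemany("INSERT INTO tbl VALUES (?, ?)",
                 [(i, chr(97 + i % 5)) for i in range(20)])

# Take the 10 smallest columnname values, then present them descending.
rows = conn.execute("""
    SELECT columnname FROM (
        SELECT columnname FROM tbl ORDER BY columnname ASC LIMIT 10
    ) SQ
    ORDER BY columnname DESC
""").fetchall()
print([r[0] for r in rows])  # 9 down to 0
```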
qid & accept id: (4866013, 4866299) query: Merging rows when counting - Django/SQL soup:

soup wrap:

Django/SQL solution as requested:

the count of the different category_codes used:

category_codes_cnt = Item.objects.values('category_codes').distinct().count()

count of the different unique_codes used:

unique_codes_cnt = Item.objects.values('unique_codes').distinct().count()

count of the different combination of category_code and unique_code used:

codes_cnt = Item.objects.values('category_codes', 'unique_codes').distinct().count()
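For reference, the SQL these ORM calls boil down to is plain COUNT(DISTINCT ...). A small sketch with Python's sqlite3 and a hypothetical item table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (category_codes TEXT, unique_codes TEXT)")
conn.executemany("INSERT INTO item VALUES (?, ?)",
                 [("A", "x"), ("A", "y"), ("B", "x"), ("B", "x")])

category_cnt = conn.execute(
    "SELECT COUNT(DISTINCT category_codes) FROM item").fetchone()[0]
unique_cnt = conn.execute(
    "SELECT COUNT(DISTINCT unique_codes) FROM item").fetchone()[0]
# Distinct combinations need a subquery (COUNT(DISTINCT a, b) is not portable).
combo_cnt = conn.execute(
    "SELECT COUNT(*) FROM (SELECT DISTINCT category_codes, unique_codes FROM item)"
).fetchone()[0]
print(category_cnt, unique_cnt, combo_cnt)  # 2 2 3
```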
qid & accept id: (4890793, 4890867) query: MySQL database for hashes soup:

soup wrap:

The hash column should be CHAR(32), since that is the length of an MD5 hash:

CREATE TABLE `hashes` (
    `id` INT NOT NULL AUTO_INCREMENT, 
    `hash` CHAR(32), 
    PRIMARY KEY (`id`)
);

mysql> describe hashes;
+-------+----------+------+-----+---------+----------------+
| Field | Type     | Null | Key | Default | Extra          |
+-------+----------+------+-----+---------+----------------+
| id    | int(11)  | NO   | PRI | NULL    | auto_increment |
| hash  | char(32) | YES  |     | NULL    |                |
+-------+----------+------+-----+---------+----------------+

If you want to select from the table given user input:

-- Insert sample data:
mysql> INSERT INTO `hashes` VALUES (null, MD5('hello'));
Query OK, 1 row affected (0.00 sec)

-- Test retrieval:
mysql> SELECT * FROM `hashes` WHERE `hash` = MD5('hello');
+----+----------------------------------+
| id | hash                             |
+----+----------------------------------+
|  1 | 5d41402abc4b2a76b9719d911017c592 |
+----+----------------------------------+
1 row in set (0.00 sec)

You can add a key on the hash column for better performance.
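A quick way to exercise the schema without a MySQL server: Python's hashlib produces the same 32-character MD5 hex digest, so the lookup can be sketched with sqlite3 (an illustrative port, not the MySQL session above):

```python
import hashlib
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE hashes (id INTEGER PRIMARY KEY AUTOINCREMENT, hash CHAR(32))")

# Compute the hash client-side instead of with MySQL's MD5() function.
digest = hashlib.md5(b"hello").hexdigest()  # 32 hex characters
conn.execute("INSERT INTO hashes (hash) VALUES (?)", (digest,))

row = conn.execute("SELECT id, hash FROM hashes WHERE hash = ?",
                   (hashlib.md5(b"hello").hexdigest(),)).fetchone()
print(row)  # (1, '5d41402abc4b2a76b9719d911017c592')
```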

qid & accept id: (4914898, 4915305) query: Selecting a record based on integer being in an array field soup:

soup wrap:

If your formatting is EXACTLY

N1, N2, ... (i.e. exactly one comma and one space between each number)

Then use this WHERE clause

WHERE ', ' + AreaID + ',' LIKE '%, 53,%'

The addition of the prefix and suffix makes every number, anywhere in the list, consistently wrapped by comma-space and suffixed by comma. Otherwise, you may get false positives with 53 appearing in part of another number.
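The delimiter trick is easy to verify in plain Python, since LIKE here is just substring matching on a consistently wrapped string:

```python
def contains_area(area_list: str, area_id: int) -> bool:
    # area_list is "N1, N2, N3" with exactly ", " between numbers.
    # Wrap both the list and the needle in the same delimiters first.
    return f", {area_id}," in f", {area_list},"

print(contains_area("17, 32, 53", 53))   # True
print(contains_area("17, 153, 32", 53))  # False: 53 inside 153 does not match
```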

Note

  1. A LIKE expression will be anything but fast, since it will always scan the entire table.
  2. You should consider normalizing the data into two tables:

Tables become

House
+---------+-----------+-------------+-------+
| HouseID | HouseType | Description | Title |
+---------+-----------+-------------+-------+
| 21      | B         | data        | data  |
| 23      | B         | data        | data  |
| 24      | B         | data        | data  |
| 23      | B         | data        | data  |
+---------+-----------+-------------+-------+

HouseArea
+---------+-------
| HouseID | AreaID
+---------+-------
| 21      | 17
| 21      | 32
| 21      | 53
| 23      | 23
| 23      | 73
..etc

Then you can use

select * from house h
where exists (
    select *
    from housearea a
    where h.houseid=a.houseid and a.areaid=53)
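The normalized schema and EXISTS query can be tried directly. A sketch in Python with sqlite3, using the table names from the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE house (houseid INTEGER, title TEXT);
    CREATE TABLE housearea (houseid INTEGER, areaid INTEGER);
    INSERT INTO house VALUES (21, 'a'), (23, 'b'), (24, 'c');
    INSERT INTO housearea VALUES (21, 17), (21, 32), (21, 53),
                                 (23, 23), (23, 73);
""")
rows = conn.execute("""
    SELECT h.houseid FROM house h
    WHERE EXISTS (SELECT * FROM housearea a
                  WHERE h.houseid = a.houseid AND a.areaid = 53)
""").fetchall()
print(rows)  # [(21,)]
```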
qid & accept id: (4948269, 4948311) query: SQL : Test if a column has the "Not Null" property soup:

soup wrap:

Any particular RDBMS?

In SQL Server

use master

SELECT COLUMNPROPERTY( OBJECT_ID('dbo.spt_values'),'number','AllowsNull')

Or (more standard)

select IS_NULLABLE 
from INFORMATION_SCHEMA.COLUMNS 
where TABLE_SCHEMA='dbo' 
      AND TABLE_NAME='spt_values' 
      AND COLUMN_NAME='number'
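Other systems expose nullability differently. In SQLite, for instance (shown here via Python's sqlite3, as an illustrative aside), PRAGMA table_info reports a notnull flag per column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER NOT NULL, b TEXT)")

# Each table_info row is (cid, name, type, notnull, dflt_value, pk).
nullable = {row[1]: not row[3]
            for row in conn.execute("PRAGMA table_info(t)")}
print(nullable)  # {'a': False, 'b': True}
```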
qid & accept id: (4960337, 4960435) query: How find Customers who Bought Product A and D > 6 months apart? soup:
soup wrap:
select A.CustID, ElapsedDays = datediff(d, A.InvoiceDate, B.InvoiceDate)
from Orders A
inner join Orders B on B.CustID = A.CustID
    and B.ProdID = 312
    -- more than 6 months ago
    and B.InvoiceDate > dateadd(m,6,A.InvoiceDate)
where A.ProdID = 105

The above query is a simple interpretation of your requirement, matching ANY purchase of A (105) followed by a purchase of D (312) more than 6 months later. If the customer purchased

it would return 2 rows for the customer (Jan and March), since both of those are followed by a D purchase more than 6 months later.

The following query instead finds all cases where the LAST A purchase is 6 months or more before the FIRST D purchase.

select A.CustID, ElapsedDays = datediff(d, A.InvoiceDate, B.InvoiceDate)
from (
    select CustID, Max(InvoiceDate) InvoiceDate
    from Orders
    where ProdID = 105
    group by CustID) A
inner join (
    select CustID, Min(InvoiceDate) InvoiceDate
    from Orders
    where ProdID = 312
    group by CustID) B on B.CustID = A.CustID
    -- more than 6 months ago
    and B.InvoiceDate > dateadd(m,6,A.InvoiceDate)

And if for the same scenario above, you don't want to see this customer because the A (Jul) and D (Sep) purchases are not 6 months apart, you can exclude them from the first query using an EXISTS filter.

select A.CustID, ElapsedDays = datediff(d, A.InvoiceDate, B.InvoiceDate)
from Orders A
inner join Orders B on B.CustID = A.CustID
    and B.ProdID = 312
    -- more than 6 months ago
    and B.InvoiceDate > dateadd(m,6,A.InvoiceDate)
where A.ProdID = 105
  AND NOT EXISTS (
    SELECT *
    FROM Orders C
    WHERE C.CustID=A.CustID
    AND C.InvoiceDate > A.InvoiceDate
    and C.InvoiceDate < B.InvoiceDate
    and C.ProdID in (105,312))
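A sketch of the first query ported to SQLite via Python's sqlite3, with julianday() for the day count and date(..., '+6 months') in place of DATEADD (sample data hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (CustID, ProdID, InvoiceDate)")
conn.executemany("INSERT INTO Orders VALUES (?, ?, ?)", [
    (7, 105, "2010-01-15"),   # product A
    (7, 312, "2010-09-20"),   # product D, more than 6 months later
    (8, 105, "2010-01-15"),
    (8, 312, "2010-03-01"),   # too soon: filtered out
])
rows = conn.execute("""
    SELECT A.CustID,
           CAST(julianday(B.InvoiceDate) - julianday(A.InvoiceDate) AS INT)
    FROM Orders A
    JOIN Orders B ON B.CustID = A.CustID AND B.ProdID = 312
                 AND B.InvoiceDate > date(A.InvoiceDate, '+6 months')
    WHERE A.ProdID = 105
""").fetchall()
print(rows)  # customer 7 only, with the elapsed days
```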
qid & accept id: (4971561, 4971605) query: traversing a tree upwards soup:

soup wrap:

Reverse your comparison!

SELECT * FROM reg WHERE tree='/20/1/1/1/1' OR '/20/1/1/1/1' LIKE CONCAT(tree, "/%");

Good luck


mysql> create table temp_reg (tree varchar(255));
Query OK, 0 rows affected (0.01 sec)

mysql> insert into temp_reg values ('/20/1/1/1/1'),('/30/1/1/1'),('/20/1');
Query OK, 3 rows affected (0.00 sec)
Records: 3  Duplicates: 0  Warnings: 0

mysql> select * from temp_reg where '/20/1/1/1/1' LIKE CONCAT(tree, "%");
+-------------+
| tree        |
+-------------+
| /20/1/1/1/1 |
| /20/1       |
+-------------+
2 rows in set (0.00 sec)
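The same ancestor lookup ported to SQLite (via Python's sqlite3), with the || operator standing in for MySQL's CONCAT:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE temp_reg (tree VARCHAR(255))")
conn.executemany("INSERT INTO temp_reg VALUES (?)",
                 [("/20/1/1/1/1",), ("/30/1/1/1",), ("/20/1",)])

# The fixed path is on the LEFT of LIKE; each stored prefix is the pattern.
rows = conn.execute(
    "SELECT tree FROM temp_reg WHERE ? LIKE tree || '%'",
    ("/20/1/1/1/1",)).fetchall()
print({r[0] for r in rows})  # the node itself and its ancestor /20/1
```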
qid & accept id: (4986731, 4986748) query: How to select mysql rows in the order of IN clause soup:

soup wrap:

Use the FIND_IN_SET function:

SELECT e.* 
  FROM EMPLOYEE e 
 WHERE e.code in (1,3,2,4) 
ORDER BY FIND_IN_SET(e.code, '1,3,2,4')

Or use a CASE statement:

SELECT e.* 
  FROM EMPLOYEE e 
 WHERE e.code in (1,3,2,4) 
ORDER BY CASE e.code
           WHEN 1 THEN 1 
           WHEN 3 THEN 2
           WHEN 2 THEN 3
           WHEN 4 THEN 4
         END
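FIND_IN_SET is MySQL-only, but the CASE form is portable. Here it is exercised in SQLite via Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (code INTEGER)")
conn.executemany("INSERT INTO employee VALUES (?)", [(1,), (2,), (3,), (4,)])

# Order rows in the same sequence as the IN list, via CASE.
rows = conn.execute("""
    SELECT code FROM employee
    WHERE code IN (1, 3, 2, 4)
    ORDER BY CASE code WHEN 1 THEN 1 WHEN 3 THEN 2
                       WHEN 2 THEN 3 WHEN 4 THEN 4 END
""").fetchall()
print([r[0] for r in rows])  # [1, 3, 2, 4]
```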
qid & accept id: (5020149, 5020178) query: Limit SQL result by type (column value) soup:
soup wrap:
select * from daily_meal where type = 'fruit' limit 1
union
select * from daily_meal where type = 'vegetable'

example

mysql> desc daily_meal;
+-------+--------------+------+-----+---------+-------+
| Field | Type         | Null | Key | Default | Extra |
+-------+--------------+------+-----+---------+-------+
| name  | varchar(100) | YES  |     | NULL    |       |
| type  | varchar(100) | YES  |     | NULL    |       |
+-------+--------------+------+-----+---------+-------+
2 rows in set (0.00 sec)

mysql> select * from daily_meal;
+----------+-----------+
| name     | type      |
+----------+-----------+
| apple    | fruit     |
| potato   | vegetable |
| eggplant | vegetable |
| cucumber | vegetable |
| lemon    | fruit     |
| orange   | fruit     |
| carrot   | vegetable |
+----------+-----------+
7 rows in set (0.00 sec)

mysql> select * from daily_meal where type = 'fruit' limit 1
    -> union
    -> select * from daily_meal where type = 'vegetable';
+----------+-----------+
| name     | type      |
+----------+-----------+
| apple    | fruit     |
| potato   | vegetable |
| eggplant | vegetable |
| cucumber | vegetable |
| carrot   | vegetable |
+----------+-----------+
5 rows in set (0.00 sec)
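In SQLite the per-branch LIMIT must be wrapped in a subquery before the UNION; a sketch via Python's sqlite3 with the same sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE daily_meal (name TEXT, type TEXT)")
conn.executemany("INSERT INTO daily_meal VALUES (?, ?)", [
    ("apple", "fruit"), ("potato", "vegetable"), ("eggplant", "vegetable"),
    ("cucumber", "vegetable"), ("lemon", "fruit"), ("orange", "fruit"),
    ("carrot", "vegetable"),
])
rows = conn.execute("""
    SELECT * FROM (SELECT * FROM daily_meal WHERE type = 'fruit' LIMIT 1)
    UNION
    SELECT * FROM daily_meal WHERE type = 'vegetable'
""").fetchall()
print(len(rows))  # 5: one fruit plus all four vegetables
```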
qid & accept id: (5081080, 5081132) query: Oracle Pl/SQl: custom function with intermediate results soup:

soup wrap:

If I understand correctly, you just need to declare the "var" variable...

create or replace FUNCTION EXAMPLE (param IN VARCHAR2)
RETURN NUMBER
AS
   var VARCHAR2(100);  -- This datatype may need modification
BEGIN
  select 
  into   var
  from   dual;

  return to_number();
END EXAMPLE ;

Depending on exactly what you're doing, there may be a better approach that doesn't need the SELECT ... FROM DUAL:

create or replace FUNCTION EXAMPLE (param IN VARCHAR2)
RETURN NUMBER
AS
   var VARCHAR2(100);  -- This datatype may need modification
BEGIN
  var := ;

  return to_number();
END EXAMPLE ;
qid & accept id: (5087616, 5087839) query: Dynamically get the maximum and minimum allowable value for a number column? soup:

soup wrap:

It seems that you want the records whose value for money = 0 to appear last.

If this is the case, you can use an ORDER BY clause like this:

order by 
case when money = 0 then 0
                    else 1 
end desc,
money desc

With a working example, that would be

create table tq84_order_by (
  txt   varchar2(10),
  money number not null
);

insert into tq84_order_by values ('aaa', 0);
insert into tq84_order_by values ('bbb', 2);
insert into tq84_order_by values ('ccc',-3);
insert into tq84_order_by values ('ddd', 4);
insert into tq84_order_by values ('eee', 1);

select * from tq84_order_by
order by 
case when money = 0 then 0
                    else 1 
                    end desc,
                    money desc;

resulting in

TXT             MONEY
---------- ----------
ddd                 4
bbb                 2
eee                 1
ccc                -3
aaa                 0    
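The zero-last ordering also works unchanged outside Oracle; a sketch in SQLite via Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tq84_order_by (txt TEXT, money NUMERIC NOT NULL)")
conn.executemany("INSERT INTO tq84_order_by VALUES (?, ?)",
                 [("aaa", 0), ("bbb", 2), ("ccc", -3), ("ddd", 4), ("eee", 1)])

# Nonzero rows first (flag 1 sorts before flag 0 under DESC), then by value.
rows = conn.execute("""
    SELECT txt FROM tq84_order_by
    ORDER BY CASE WHEN money = 0 THEN 0 ELSE 1 END DESC, money DESC
""").fetchall()
print([r[0] for r in rows])  # ['ddd', 'bbb', 'eee', 'ccc', 'aaa']
```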
qid & accept id: (5093557, 5671243) query: Updating intersection tables, alternative to delete->insert soup:

soup wrap:

Let's say the table starts like this.

order_accessories
PK_refno  PK_acc
1         73
1         74
1         75
1         86
1         92

Let's also say that 75 is supposed to be 76. Assuming a sane user interface, the user can just change 75 to 76. A sane user interface would send this statement to the dbms.

update order_accessories
set PK_acc = 76
where (PK_refno = 1 and PK_acc = 75);

If 75 were not supposed to be there in the first place, then the user would just delete that one row. A sane user interface would send this statement to the dbms.

delete from order_accessories
where (PK_refno = 1 and PK_acc = 75);
qid & accept id: (5111728, 5111823) query: sql select puzzle: remove children when parent is filtered out soup:

soup wrap:

ANSI-compliant; each specific DBMS may have a faster implementation.

select *
from tbl
where id in -- PARENTS of CHILDREN that match
(   select parent_id from tbl
    where values0 > 10 and has_children = 0)
or id in   -- ONE CHILD ONLY
(   select MIN(id) from tbl
    where values0 > 10 and has_children = 0
    group by parent_id)
or id in   -- PARENTS
(   select id from tbl
    where values0 > 10 and has_children = 1)

Better written as a JOIN

select t.*
from 
(   select parent_id as ID from tbl
    where values0 > 10 and has_children = 0
    UNION
    select MIN(id) from tbl
    where values0 > 10 and has_children = 0
    group by parent_id
    UNION
    select id from tbl
    where values0 > 10 and has_children = 1) X
join tbl t on X.ID = t.ID
qid & accept id: (5171809, 5171866) query: Edit query based on parameters in SQL Reporting Services soup:

soup wrap:

There are two ways you could do it:

  1. Write multiple queries (one for each table), then switch among them based upon the parameter value
  2. Use dynamic SQL

For 1, you'd do something like this:

if @param = 'value'
    select Col1, Col2 from Table1
else
    select Col1, Col2 from Table2

For 2, you'd do something like this:

declare @sql nvarchar(4000)

select @sql = 'select Col1, Col2 from ' + (case when @param = 'value' then 'Table1' else 'Table2' end)

exec sp_executesql @sql

WARNING: Be very careful of option 2. If option 1 is feasible, then it is the safer option, as dynamically constructing SQL based upon user-supplied values is always a dangerous affair. While this particular example doesn't use the parameter directly in the SQL, it would be very easy to write something that did, and thus very easy to find a way to exploit it.
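One way to keep the dynamic approach safe is to never interpolate the raw parameter at all, mapping it through a whitelist of known table names instead. A sketch in Python with sqlite3 (table names are the hypothetical ones from above):

```python
import sqlite3

# Whitelist: any parameter value not listed falls back to Table2.
TABLES = {"value": "Table1"}

def build_query(param: str) -> str:
    table = TABLES.get(param, "Table2")
    return f"SELECT Col1, Col2 FROM {table}"

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Table1 (Col1, Col2);
    CREATE TABLE Table2 (Col1, Col2);
    INSERT INTO Table1 VALUES (1, 'one');
    INSERT INTO Table2 VALUES (2, 'two');
""")
print(conn.execute(build_query("value")).fetchall())      # [(1, 'one')]
print(conn.execute(build_query("'; DROP--")).fetchall())  # [(2, 'two')]
```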

qid & accept id: (5182723, 5182773) query: SQL - searching database with the LIKE operator soup:

soup wrap:

If you're using SQL Server, have a look at SOUNDEX.

For your example:

select SOUNDEX('Dinosaurs'), SOUNDEX('Dinosores')

Both return the identical value (D526).

You can also use the DIFFERENCE function (documented on the same page as SOUNDEX), which compares levels of similarity (4 being the most similar, 0 the least).

SELECT DIFFERENCE('Dinosaurs', 'Dinosores'); --returns 4

Edit:

After hunting around a bit for a multi-text option, it seems that this isn't all that easy. I would refer you to the Fuzzy Logic link in the answer provided by @Neil Knight (+1 to that, from me!).

This Stack Overflow article also details possible sources of implementations of fuzzy logic in T-SQL. One respondent also outlined full-text indexing as an option you might want to investigate.
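To see why the two spellings collide, here is a simplified Soundex sketch in Python; it ignores the vowel-separator and H/W adjacency rules of the full algorithm, but reproduces the D526 match from the example:

```python
# Letter-to-digit map used by Soundex (vowels and h/w/y get no digit).
CODES = {c: d for d, letters in enumerate(
    ("bfpv", "cgjkqsxz", "dt", "l", "mn", "r"), start=1) for c in letters}

def soundex(word: str) -> str:
    word = word.lower()
    digits = [str(CODES[c]) for c in word if c in CODES]
    # Collapse adjacent duplicate digits.
    collapsed = [d for i, d in enumerate(digits) if i == 0 or d != digits[i - 1]]
    # The retained first letter replaces its own digit.
    if word[0] in CODES and collapsed and collapsed[0] == str(CODES[word[0]]):
        collapsed = collapsed[1:]
    return (word[0].upper() + "".join(collapsed) + "000")[:4]

print(soundex("Dinosaurs"), soundex("Dinosores"))  # D526 D526
```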

qid & accept id: (5282607, 5282641) query: How to Filter grouped query result set (SQL) soup:

soup wrap:

You can add a condition saying "this Code must have a row with CT" by writing a sub-query:

SELECT Code FROM transaction WHERE kind='CT' GROUP BY Code ;

And to your first query add a filter to show only those records which have Code in previous subquery:

... AND Code IN (SELECT Code FROM transaction WHERE kind='CT' GROUP BY Code ) ...

This will get rid of the record with Code 2, because 2 will not be in the results of that sub-query.

qid & accept id: (5290418, 5290539) query: How to insert a row's primary key in to another one of its columns? soup:

soup wrap:

You can do it in a single call from PHP to MySQL if you use a stored procedure:

Example calls

call insert_employee('f00',32);
call insert_employee('bar',64);

$sql = sprintf("call insert_employee('%s',%d)", $name, $age);

Script

drop table if exists employees;
create table employees
(
id int unsigned not null auto_increment primary key,
name varchar(32) not null,
age tinyint unsigned not null default 0,
pid int unsigned not null default 0
)
engine=innodb;

drop procedure if exists insert_employee;

delimiter #

create procedure insert_employee
(
in p_name varchar(32),
in p_age tinyint unsigned
)
begin

declare v_id int unsigned default 0;

  insert into employees(name, age) values (p_name, p_age);
  set v_id = last_insert_id();
  update employees set pid = v_id where id = v_id;
end#

delimiter ;
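The procedure's logic maps naturally onto any client API that exposes the new key. A sketch in Python with sqlite3, where cursor.lastrowid plays the role of LAST_INSERT_ID():

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE employees (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL,
    age INTEGER NOT NULL DEFAULT 0,
    pid INTEGER NOT NULL DEFAULT 0)""")

def insert_employee(name: str, age: int) -> int:
    cur = conn.execute("INSERT INTO employees (name, age) VALUES (?, ?)",
                       (name, age))
    new_id = cur.lastrowid                      # the freshly assigned key
    conn.execute("UPDATE employees SET pid = ? WHERE id = ?",
                 (new_id, new_id))              # copy it into pid
    return new_id

insert_employee("f00", 32)
insert_employee("bar", 64)
print(conn.execute("SELECT id, pid FROM employees").fetchall())  # [(1, 1), (2, 2)]
```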
qid & accept id: (5292145, 5292209) query: SQL query for maximum date soup:

soup wrap:

I think you want something like

SELECT E.UserID
    , E.EntryDate
    , (SELECT TOP 1 Detail
       FROM Status AS S
       WHERE S.UserID = E.UserID
       AND S.StatusDate <= E.EntryDate
       ORDER BY S.StatusDate DESC)
FROM Entry AS E

If your database doesn't support TOP, or you would prefer to avoid the ORDER BY for performance reasons, you could try something like:

SELECT E.UserID
    , E.EntryDate
    , (SELECT S1.Detail
       FROM Status AS S1
       WHERE S1.UserID = E.UserID
       AND S1.StatusDate = (SELECT MAX(S2.StatusDate)
                            FROM Status AS S2
                            WHERE S2.UserID = E.UserID
                            AND S2.StatusDate <= E.EntryDate))
FROM Entry AS E
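A sketch of the first query in SQLite (via Python's sqlite3), with LIMIT 1 standing in for TOP 1 and a tiny hypothetical data set:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Entry (UserID, EntryDate);
    CREATE TABLE Status (UserID, StatusDate, Detail);
    INSERT INTO Entry VALUES (1, '2011-03-10');
    INSERT INTO Status VALUES (1, '2011-01-01', 'old'),
                              (1, '2011-03-01', 'new'),
                              (1, '2011-04-01', 'later');
""")
# For each entry, pick the latest status dated at or before the entry.
rows = conn.execute("""
    SELECT E.UserID, E.EntryDate,
           (SELECT S.Detail FROM Status S
            WHERE S.UserID = E.UserID AND S.StatusDate <= E.EntryDate
            ORDER BY S.StatusDate DESC LIMIT 1)
    FROM Entry E
""").fetchall()
print(rows)  # [(1, '2011-03-10', 'new')]
```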
qid & accept id: (5331808, 5331839) query: How do I combine the results of two queries with ordering? soup:

soup wrap:

You can use UNION ALL to get rows from both tables:

SELECT id, article, author, tag, date FROM table1 WHERE tag = '1'
UNION ALL
SELECT id, article, author, tag, date FROM table2 WHERE tag = '3'
ORDER BY date

You may also want to consider restructuring your database so that instead of using two tables you use just a single table with a field to distinguish the type of each row. Then the query can simplify to:

SELECT id, article, author, tag, date
FROM yourtable
WHERE (tag, type) IN (('1','type1'), ('3','type2'))
ORDER BY date
qid & accept id: (5355585, 5355648) query: how to sort order of LEFT JOIN in SQL query? soup:

soup wrap:

Try using MAX with a GROUP BY.

SELECT u.userName, MAX(c.carPrice)
FROM users u
    LEFT JOIN cars c ON u.id = c.belongsToUser
WHERE u.id = 4
GROUP BY u.userName;

Further information on GROUP BY

The group by clause is used to split the selected records into groups based on unique combinations of the group by columns. This then allows us to use aggregate functions (e.g. MAX, MIN, SUM, AVG, ...) that will be applied to each group of records in turn. The database will return a single result record for each grouping.

For example, if we have a set of records representing temperatures over time and location in a table like this:

Location   Time    Temperature
--------   ----    -----------
London     12:00          10.0
Bristol    12:00          12.0
Glasgow    12:00           5.0
London     13:00          14.0
Bristol    13:00          13.0
Glasgow    13:00           7.0
...

Then if we want to find the maximum temperature by location, then we need to split the temperature records into groupings, where each record in a particular group has the same location. We then want to find the maximum temperature of each group. The query to do this would be as follows:

SELECT Location, MAX(Temperature)
FROM Temperatures
GROUP BY Location;
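
The temperature example can be verified as-is with Python's sqlite3 module:

```python
import sqlite3

# The example table from above, loaded into an in-memory database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Temperatures (Location TEXT, Time TEXT, Temperature REAL)")
con.executemany("INSERT INTO Temperatures VALUES (?, ?, ?)", [
    ("London",  "12:00", 10.0),
    ("Bristol", "12:00", 12.0),
    ("Glasgow", "12:00",  5.0),
    ("London",  "13:00", 14.0),
    ("Bristol", "13:00", 13.0),
    ("Glasgow", "13:00",  7.0),
])

# One result row per group: the maximum temperature per location.
maxima = dict(con.execute(
    "SELECT Location, MAX(Temperature) FROM Temperatures GROUP BY Location"
).fetchall())
print(maxima)  # maps each location to its group's maximum
```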
qid & accept id: (5380843, 5380919) query: Polymorphic ORM database pattern soup:


You're having difficulty finding it because it's not a real (in the sense of widely adopted and encouraged) database design pattern.

Stay away from patterns like this. While ORMs make mapping database tables to types easier, tables are not types, and vice versa. While it's not clear what the model you've described is supposed to do, you should not have columns that serve as fake foreign keys to multiple tables (when I say "fake", I mean that you're storing a simple identifier value that corresponds to the primary key of another table, but you can't actually define the column as a foreign key).

Model your database to represent the data, model your objects to represent the process, and use your ORM and intermediate layers to do the translation; don't try to push the database into your code, and don't push your code into the database.

Edit in response to comment

You're mixing database and OO terminology; while I'm not familiar with the syntax you're using to define that function, I'm assuming it's an instance function on the User type called getLocation that takes no parameters and returns a Location object. Databases don't support the concepts of instance (or any type-based) functions; relational databases can have user-defined functions, but these are simple procedural functions that take parameters and return either values or result sets. They do not correspond to particular tables or fields in any way, other than the fact that you can use them within the body of the function.

That being said, there are two questions to answer here: how to do what you've asked, and what might be a better solution.

For what you've asked, it sounds like you have a supertype-subtype relationship, which is a standard database design pattern. In this case, you have a single supertype table that represents the parent:

Location
---------------
LocationID (PK)
...other common attributes

(Note here that I'm using LocationID for the sake of simplicity; you should have more specific and logical attributes to define the primary key, if possible)

Then you have one or more tables that define subtypes:

Address
-----------
LocationID (PK, FK to Location)
...address-specific attributes

Country
-----------
LocationID (PK, FK to Location)
...country-specific attributes

If a specific instance of Location can only be one of the subtypes, then you should add a discriminator value to the parent table (Location) that indicates which of the subtypes it corresponds to. You can use CHECK constraints to ensure that only valid values are in this field for a given row.

In the end, though, it sounds like you might be better served with a hybrid approach. You're fundamentally representing two different types of locations, from what I can see: coordinate locations (a latitude/longitude pair) and postal locations (a country, optionally narrowed down to a city and address).

Given this, a simple model would look like this:

Location
------------
LocationID (PK)
LocationType (non-nullable) ('C' for coordinate, 'P' for postal)

LocationCoordinate
------------------
LocationID (PK; FK to Location)
Latitude (non-nullable)
Longitude (non-nullable)

LocationPostal
------------------
LocationID (PK, FK to Location)
Country (non-nullable)
City (nullable)
Address (nullable)

Now the only problem that remains is that we have nullable columns. If you want to keep your queries simple but take (justified!) flak from people about leaving nullable columns, then you can leave it as-is. If you want to go to what most people would consider a better-designed database, you can move to 6NF for our two nullable columns. Doing this will also have the nice side-effect of giving us a little more control over how these fields are populated without having to do anything extra.

Our two nullable fields are City and Address. I am going to assume that having an Address without a City would be nonsense. In this case, we remove these two attributes from the LocationPostal table and create two more tables:

LocationPostalCity
------------------
LocationID (PK; FK to LocationPostal)
City (non-nullable)

LocationPostalCityAddress
-------------------------
LocationID (PK; FK to LocationPostalCity)
Address (non-nullable)
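
For illustration, here is a minimal SQLite sketch of the supertype/subtype tables with the discriminator guarded by a CHECK constraint; the table and column names follow the example above, and the inserted data is made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Location (
    LocationID   INTEGER PRIMARY KEY,
    LocationType TEXT NOT NULL CHECK (LocationType IN ('C', 'P'))
);
CREATE TABLE LocationCoordinate (
    LocationID INTEGER PRIMARY KEY REFERENCES Location(LocationID),
    Latitude   REAL NOT NULL,
    Longitude  REAL NOT NULL
);
CREATE TABLE LocationPostal (
    LocationID INTEGER PRIMARY KEY REFERENCES Location(LocationID),
    Country    TEXT NOT NULL
);
""")

# A coordinate-type location and its subtype row.
con.execute("INSERT INTO Location VALUES (1, 'C')")
con.execute("INSERT INTO LocationCoordinate VALUES (1, 51.5, -0.12)")

# The CHECK constraint rejects any discriminator outside ('C', 'P').
try:
    con.execute("INSERT INTO Location VALUES (2, 'X')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```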
qid & accept id: (5393244, 5393325) query: PLSQL read value from XML (Again)? soup:


Do the same thing as in the answer you referenced, but change the XPath expression (second argument to XMLTYPE) from

'//SOAProxyResult'

to e.g.

'//t:ItemId/@Id'

or

'//t:ItemId/@ChangeKey'

The third argument will need to declare the t namespace prefix:

'xmlns:t="foobarbaz"'

and of course your input XML will need to declare that namespace prefix too.
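
The same attribute extraction can be sketched outside the database with Python's xml.etree; the namespace URI and the Id/ChangeKey values below are placeholders, not real values:

```python
import xml.etree.ElementTree as ET

# Placeholder document; the t prefix is bound to a made-up namespace URI.
doc = """<Response xmlns:t="http://example.com/types">
    <t:ItemId Id="AAMkAD=" ChangeKey="CQAAAB=="/>
</Response>"""

ns = {"t": "http://example.com/types"}
item = ET.fromstring(doc).find(".//t:ItemId", ns)
print(item.get("Id"))         # what //t:ItemId/@Id selects
print(item.get("ChangeKey"))  # what //t:ItemId/@ChangeKey selects
```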

qid & accept id: (5410918, 5410950) query: composing a SQL query with a date offset soup:


This will return tomorrow's data

WHERE ChangingDate >= dateadd(dd, datediff(dd, 0, getdate())+1, 0)
and ChangingDate < dateadd(dd, datediff(dd, 0, getdate())+2, 0)

This will return today's data

WHERE ChangingDate >= dateadd(dd, datediff(dd, 0, getdate())+0, 0)
and ChangingDate < dateadd(dd, datediff(dd, 0, getdate())+1, 0)
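
The same half-open range idea, sketched with sqlite3 and a fixed "today" so the result is reproducible (table name and data are hypothetical):

```python
import sqlite3
from datetime import date, timedelta

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (ChangingDate TEXT)")
today = date(2011, 3, 24)  # pretend "today" for reproducibility
con.executemany("INSERT INTO t VALUES (?)", [
    (str(today - timedelta(days=1)) + " 09:00",),
    (str(today) + " 09:00",),
    (str(today) + " 23:59",),
    (str(today + timedelta(days=1)) + " 00:30",),
])

# Today's data: at or after midnight today, strictly before midnight tomorrow.
rows = con.execute(
    "SELECT ChangingDate FROM t WHERE ChangingDate >= ? AND ChangingDate < ?",
    (str(today), str(today + timedelta(days=1))),
).fetchall()
print(rows)  # only the two timestamps that fall on 2011-03-24
```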

See also How Does Between Work With Dates In SQL Server?

qid & accept id: (5455914, 5456435) query: TRIGGER based on spatial data soup:


This doesn't work?

DELIMITER $$
CREATE TRIGGER trig_pano_raw BEFORE INSERT ON pano_raw
FOR EACH ROW
BEGIN
    SET NEW.latlng = PointFromWKB( POINT( NEW.lat, NEW.lng ) );
END;$$
DELIMITER ;

Regarding the update trigger:

DELIMITER $$
CREATE TRIGGER trig_Update_pano_raw BEFORE UPDATE ON pano_raw
FOR EACH ROW
BEGIN
    IF ((NEW.lat != OLD.lat) OR (NEW.lng != OLD.lng))
    THEN
        SET NEW.latlng = PointFromWKB( POINT( NEW.lat, NEW.lng ) );
    ELSEIF (NEW.latlng != OLD.latlng)
    THEN
        BEGIN
            SET NEW.lat = X(NEW.latlng);
            SET NEW.lng = Y(NEW.latlng);
        END;
    END IF;
END;$$
DELIMITER ;
qid & accept id: (5462205, 5462250) query: MySQL SELECT function to sum current data soup:


This is called cumulative sum.

In Oracle and PostgreSQL, it is calculated using a window function:

SELECT  id, val, SUM(val) OVER (ORDER BY id ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
FROM    mytable

However, MySQL does not support it.

In MySQL, you can calculate it using session variables:

SET @s = 0;

SELECT  id, val, @s := @s + val
FROM    mytable
ORDER BY
        id
;

or in a pure set-based but less efficient way:

SELECT  t1.id, t1.val, SUM(t2.val)
FROM    mytable t1
JOIN    mytable t2
ON      t2.id <= t1.id
GROUP BY
        t1.id
;
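
The set-based variant can be checked directly with sqlite3 (made-up rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, val INTEGER)")
con.executemany("INSERT INTO mytable VALUES (?, ?)", [(1, 10), (2, 20), (3, 5)])

# Each row joins to every row with an id at or below its own,
# so the SUM over each group is the running total.
rows = con.execute("""
    SELECT t1.id, t1.val, SUM(t2.val)
    FROM mytable t1
    JOIN mytable t2 ON t2.id <= t1.id
    GROUP BY t1.id
    ORDER BY t1.id
""").fetchall()
print(rows)  # [(1, 10, 10), (2, 20, 30), (3, 5, 35)]
```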
qid & accept id: (5479975, 5480412) query: query for a set in a relational database soup:


I won't comment on whether there is a better suited schema for doing this (it's quite possible), but for a schema having columns name and item, the following query should work. (MySQL syntax)

SELECT k.name
FROM (SELECT DISTINCT name FROM sets) AS k
INNER JOIN sets i1 ON (k.name = i1.name AND i1.item = 1)
INNER JOIN sets i2 ON (k.name = i2.name AND i2.item = 3)
INNER JOIN sets i3 ON (k.name = i3.name AND i3.item = 5)
LEFT JOIN sets ix ON (k.name = ix.name AND ix.item NOT IN (1, 3, 5))
WHERE ix.name IS NULL;

The idea is that we have all the set keys in k, which we then join with the set item data in sets once for each set item in the set we are searching for, three in this case. Each of the three inner joins with table aliases i1, i2 and i3 filters out all set names that don't contain the item searched for with that join. Finally, we have a left join with sets with table alias ix, which brings in all the extra items in the set, that is, every item we were not searching for. ix.name is NULL in the case that no extra items are found, which is exactly what we want, thus the WHERE clause. The query returns a row containing the set key if the set is found, no rows otherwise.


Edit: The idea behind collapsar's answer seems to be much better than mine, so here's a bit shorter version of that with explanation.

SELECT sets.name
FROM sets
LEFT JOIN (
    SELECT DISTINCT name
    FROM sets
    WHERE item NOT IN (1, 3, 5)
) s1
ON (sets.name = s1.name)
WHERE s1.name IS NULL
GROUP BY sets.name
HAVING COUNT(sets.item) = 3;

The idea here is that subquery s1 selects the keys of all sets that contain items other than the ones we are looking for. Thus, when we left join sets with s1, s1.name is NULL when the set only contains items we are searching for. We then group by set key and filter out any sets having the wrong number of items. We are then left with only sets which contain only items we are searching for and are of the correct length. Since sets can only contain an item once, there can only be one set satisfying those criteria, and that's the one we're looking for.


Edit: It just dawned on me how to do this without the exclusion.

SELECT totals.name
FROM (
    SELECT name, COUNT(*) count
    FROM sets
    GROUP BY name
) totals
INNER JOIN (
    SELECT name, COUNT(*) count
    FROM sets
    WHERE item IN (1, 3, 5)
    GROUP BY name
) matches
ON (totals.name = matches.name)
WHERE totals.count = 3 AND matches.count = 3;

The first subquery finds the total count of items in each set and the second one finds out the count of matching items in each set. When matches.count is 3, the set has all the items we're looking for, and if totals.count is also 3, the set doesn't have any extra items.
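
The counting version is easy to sanity-check with sqlite3; the rows below are hypothetical, with set 'a' being the exact match for {1, 3, 5}:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sets (name TEXT, item INTEGER)")
con.executemany("INSERT INTO sets VALUES (?, ?)", [
    ("a", 1), ("a", 3), ("a", 5),           # exactly {1, 3, 5}
    ("b", 1), ("b", 3),                     # missing an item
    ("c", 1), ("c", 3), ("c", 5), ("c", 7), # has an extra item
])

rows = con.execute("""
    SELECT totals.name
    FROM (SELECT name, COUNT(*) count FROM sets GROUP BY name) totals
    INNER JOIN (SELECT name, COUNT(*) count FROM sets
                WHERE item IN (1, 3, 5) GROUP BY name) matches
            ON totals.name = matches.name
    WHERE totals.count = 3 AND matches.count = 3
""").fetchall()
print(rows)  # [('a',)] -- only 'a' has 3 items total and 3 matching
```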

qid & accept id: (5501347, 5501454) query: Increment Oracle time in varchar field by a certain amount? soup:


you could use the built-in date (and interval -- thanks Alex for the link) calculation:

to_char(to_date(:x, 'hh24:mi') + numtodsinterval(:y, 'minute'), 'hh24:mi')

for instance:

SQL> WITH my_data AS (
  2     SELECT '12:15' t FROM dual
  3     UNION ALL SELECT '10:30' FROM dual
  4  )
  5  SELECT t, 
  6         to_char(to_date(t, 'hh24:mi') + INTERVAL '15' MINUTE,'hh24:mi')"t+15"
  7    FROM my_data;

T     t+15
----- -----
12:15 12:30
10:30 10:45
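
The same clock arithmetic, sketched in plain Python for comparison:

```python
from datetime import datetime, timedelta

# Mirrors to_char(to_date(t, 'hh24:mi') + <minutes>, 'hh24:mi'):
# parse the HH:MM string, shift it, format it back.
def add_minutes(hhmm, minutes):
    shifted = datetime.strptime(hhmm, "%H:%M") + timedelta(minutes=minutes)
    return shifted.strftime("%H:%M")

print(add_minutes("12:15", 15))  # 12:30
print(add_minutes("10:30", 15))  # 10:45
print(add_minutes("23:50", 15))  # 00:05 (wraps past midnight)
```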
qid & accept id: (5606689, 5607174) query: SQL Server: Is there a way to check what is the resulting data type of implicit conversion? soup:


The result of the expression is numeric (17,6). To see this

DECLARE @i  INT, @v SQL_VARIANT

SET @i = 3
SET @v = @i  / 9.0

SELECT
    CAST(SQL_VARIANT_PROPERTY(@v, 'BaseType') AS VARCHAR(30)) AS BaseType,
    CAST(SQL_VARIANT_PROPERTY(@v, 'Precision') AS INT) AS Precision,
    CAST(SQL_VARIANT_PROPERTY(@v, 'Scale') AS INT) AS Scale

Returns

BaseType   Precision   Scale
---------- ----------- -----------
numeric    17          6

Edit:

SELECT SQL_VARIANT_PROPERTY(9.0, 'BaseType'),
       SQL_VARIANT_PROPERTY(9.0, 'Precision'),
       SQL_VARIANT_PROPERTY(9.0, 'Scale')

So the literal 9.0 is treated as numeric(2,1) (Can be seen from the query above)

@i is numeric(10,0) (as per Mikael's answer)

The rules that govern why numeric(10,0)/numeric(2,1) gives numeric (17,6) are covered here

Operation:        e1 / e2
Result precision: p1 - s1 + s2 + max(6, s1 + p2 + 1)
Result scale:     max(6, s1 + p2 + 1)

Substituting the relevant values in gives

10 - 0 + 1 + max(6, 0 + 2 + 1)  = 17
max(6, 0 + 2 + 1)               =  6 
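
Those two formulas are simple enough to encode; a small sketch that reproduces the numeric(17,6) result:

```python
# SQL Server's documented result type rules for division: e1 / e2.
def divide_type(p1, s1, p2, s2):
    scale = max(6, s1 + p2 + 1)
    precision = p1 - s1 + s2 + scale
    return precision, scale

# numeric(10,0) / numeric(2,1) -> numeric(17,6), as shown above.
print(divide_type(10, 0, 2, 1))  # (17, 6)
```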
qid & accept id: (5719384, 5732491) query: Insert line into a query result (sum) soup:


Thanks for everyone's feedback/help, it at least got me thinking of different approaches. I came up with something that doesn't depend on what version of SQL Server I'm using (our vendor changes versions often so I have to be as cross-compliant as possible).

This might be considered a hack (ok, it is a hack) but it works, and it gets the job done:

SELECT company
   , product
   , price
FROM companyMaster

UNION

SELECT company + ' Total'
   , ''
   , SUM(price)
FROM companyMaster
GROUP BY company

ORDER BY company;

This solution basically uses the UNION of two select statements. The first is exactly like the original, the second produces the sum line I needed. In order to correctly locate the sum line, I did a string concatenation on the company name (appending the word 'Total'), so that when I sort alphabetically on company name, the Total row will show up at the bottom of each company section.

Here's what the final report looks like (not exactly what I wanted but functionally equivalent, just not very pretty to look at):

CompanyA    Product 7    14.99  
CompanyA    Product 3    45.95
CompanyA    Product 4    12.00
CompanyA Total           72.94
CompanyB    Product 3    45.95
CompanyB Total           45.95
CompanyC    Product 7    14.99
CompanyC    Product 3    45.95
CompanyC Total           60.94
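
The same trick is easy to reproduce with sqlite3 (which concatenates strings with || rather than T-SQL's +); the data below is made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE companyMaster (company TEXT, product TEXT, price REAL)")
con.executemany("INSERT INTO companyMaster VALUES (?, ?, ?)", [
    ("CompanyA", "Product 7", 14.99),
    ("CompanyA", "Product 3", 45.95),
    ("CompanyB", "Product 3", 45.95),
])

rows = con.execute("""
    SELECT company, product, price FROM companyMaster
    UNION
    SELECT company || ' Total', '', SUM(price)
    FROM companyMaster
    GROUP BY company
    ORDER BY company
""").fetchall()
for r in rows:
    print(r)
# 'CompanyA Total' sorts after CompanyA's rows and before CompanyB's,
# so each company's total lands at the bottom of its section.
```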
qid & accept id: (5760020, 5760379) query: ORACLE/SQL - Joining 3 tables that aren't all interconnected soup:


I know this is a matter of style, but in my opinion ANSI-style joins make this much clearer:

SELECT c.*
FROM c
JOIN a ON a.model = c.model
JOIN b on b.type = a.type

In case you have multiple matching elements in a or b, this query will return duplicates. You can either add a DISTINCT or rewrite it as an EXISTS query:

SELECT *
FROM c
WHERE EXISTS (SELECT 1
              FROM a
              JOIN b ON b.type = a.type
              WHERE a.model = c.model)

I think this should also give the same result, as long as there are no NULL values in model:

SELECT *
FROM c
WHERE c.model IN (SELECT a.model
                  FROM a
                  JOIN b ON b.type = a.type)
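
A small sqlite3 sketch (with made-up rows) showing why the plain join can duplicate rows from c while EXISTS does not:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a (model TEXT, type TEXT);
CREATE TABLE b (type TEXT);
CREATE TABLE c (model TEXT);
INSERT INTO a VALUES ('m1', 't1'), ('m1', 't2');  -- two matches for m1
INSERT INTO b VALUES ('t1'), ('t2');
INSERT INTO c VALUES ('m1');
""")

joined = con.execute("""
    SELECT c.* FROM c
    JOIN a ON a.model = c.model
    JOIN b ON b.type = a.type
""").fetchall()

exists = con.execute("""
    SELECT * FROM c
    WHERE EXISTS (SELECT 1 FROM a JOIN b ON b.type = a.type
                  WHERE a.model = c.model)
""").fetchall()

print(len(joined), len(exists))  # 2 1 -- the join repeats c's single row
```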
qid & accept id: (5795541, 5795767) query: sql query: no payments in last 90 days soup:


Ensure there's an index on payments(client_id), or even better, payments(client_id, created_at).

For an alternative way to write your query, you could try a not exists, like:

select  *
from    clients c
where   not exists
        (
        select  *
        from    payments p
        where   p.client_id = c.id
                and p.created_at > utc_timestamp() - interval 90 day
        )

Or an exclusive left join:

select  *
from    clients c
left join
        payments p
on      p.client_id = c.id
        and p.created_at > utc_timestamp() - interval 90 day
where   p.client_id is null

If both are slow, add the explain extended output to your question, so we can see why.
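
Both forms can be compared side by side with sqlite3; ISO date strings and a fixed cutoff stand in for utc_timestamp() - interval 90 day:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE clients (id INTEGER PRIMARY KEY);
CREATE TABLE payments (client_id INTEGER, created_at TEXT);
INSERT INTO clients VALUES (1), (2);
INSERT INTO payments VALUES (1, '2011-04-01');  -- paid recently
INSERT INTO payments VALUES (2, '2010-01-01');  -- only an old payment
""")
cutoff = "2011-01-26"  # stands in for "90 days ago"

not_exists = con.execute("""
    SELECT c.id FROM clients c
    WHERE NOT EXISTS (SELECT 1 FROM payments p
                      WHERE p.client_id = c.id AND p.created_at > ?)
""", (cutoff,)).fetchall()

left_join = con.execute("""
    SELECT c.id FROM clients c
    LEFT JOIN payments p
      ON p.client_id = c.id AND p.created_at > ?
    WHERE p.client_id IS NULL
""", (cutoff,)).fetchall()

print(not_exists, left_join)  # both return only client 2
```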

qid & accept id: (5803133, 5803148) query: How to use SELECT INTO with static values included? soup:
SELECT foo.id, 'R' AS type INTO bar FROM foo;

In MySQL this would normally be done with:

Lazy with no indexes

CREATE TABLE bar SELECT id, 'R' AS type FROM foo;

Nicer way (assuming you've created table bar already)

INSERT INTO bar SELECT id, 'R' AS type FROM foo;
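
Both MySQL forms also work verbatim in SQLite, which makes them easy to try from Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE foo (id INTEGER)")
con.executemany("INSERT INTO foo VALUES (?)", [(1,), (2,)])

# "Lazy" form: the table's columns come from the SELECT itself.
con.execute("CREATE TABLE bar AS SELECT id, 'R' AS type FROM foo")

# "Nicer" form: the target table already exists with explicit types.
con.execute("CREATE TABLE bar2 (id INTEGER, type TEXT)")
con.execute("INSERT INTO bar2 SELECT id, 'R' FROM foo")

print(con.execute("SELECT * FROM bar").fetchall())   # [(1, 'R'), (2, 'R')]
print(con.execute("SELECT * FROM bar2").fetchall())  # [(1, 'R'), (2, 'R')]
```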
qid & accept id: (5816567, 5816696) query: select n rows in sql soup:
SELECT *
FROM (
   SELECT country, capitol, rownum as rn
   FROM (
      SELECT country, capitol
      FROM your_table
      ORDER BY country
   )
)
WHERE rn > 1

If the "first one" is not defined through sorting by country, then you need to apply a different ORDER BY in the inner query.

Edit

For completeness, the ANSI SQL solution to this would be:

SELECT *
FROM (
   SELECT country, 
          capitol, 
          row_number() over (order by country) as rn
   FROM your_table
) t
WHERE rn > 1

That is a portable solution that works on almost all major DBMS
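
For instance, the row_number() version runs unchanged on SQLite 3.25+ (made-up rows; the extra ORDER BY just makes the output deterministic):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE your_table (country TEXT, capitol TEXT)")
con.executemany("INSERT INTO your_table VALUES (?, ?)",
                [("France", "Paris"), ("Germany", "Berlin"), ("Italy", "Rome")])

# Number the rows by country, then skip row 1.
rows = con.execute("""
    SELECT country, capitol
    FROM (SELECT country, capitol,
                 row_number() OVER (ORDER BY country) AS rn
          FROM your_table) t
    WHERE rn > 1
    ORDER BY rn
""").fetchall()
print(rows)  # [('Germany', 'Berlin'), ('Italy', 'Rome')]
```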

qid & accept id: (5943678, 6862842) query: MySQL - How to pivot NVP? soup:


That's a pretty standard implementation:

SELECT
product_id,
GROUP_CONCAT(if(name = 'Author', value, NULL)) AS 'Author',
GROUP_CONCAT(if(name = 'Publisher', value, NULL)) AS 'Publisher'
FROM product_attribute
GROUP BY product_id; 

You first have to run:

select distinct(name) from product_attribute

so that you can build the above query. But no, it will not work with identical names: GROUP_CONCAT will concatenate the values.

I've seen an implementation that adds a column and populates it with incrementing values so that it can then pivot the table using variables and a counter, but I don't have that for MySQL.

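For what it's worth, the same shape can be tested outside MySQL: SQLite also has group_concat(), with MySQL's if() rewritten as a CASE expression. A sketch with invented sample rows:

```python
import sqlite3

# SQLite stand-in for the MySQL pivot: group_concat() exists here too, and
# MySQL's if(cond, a, b) becomes a CASE expression. Sample data is made up.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE product_attribute (product_id INTEGER, name TEXT, value TEXT)")
con.executemany("INSERT INTO product_attribute VALUES (?, ?, ?)", [
    (1, "Author", "Knuth"), (1, "Publisher", "Addison-Wesley"),
    (2, "Author", "Date"),  (2, "Publisher", "O'Reilly"),
])

# group_concat() skips NULLs, so each CASE keeps only the matching attribute.
rows = con.execute("""
    SELECT product_id,
           group_concat(CASE WHEN name = 'Author'    THEN value END) AS Author,
           group_concat(CASE WHEN name = 'Publisher' THEN value END) AS Publisher
    FROM product_attribute
    GROUP BY product_id
    ORDER BY product_id
""").fetchall()
print(rows)  # [(1, 'Knuth', 'Addison-Wesley'), (2, 'Date', "O'Reilly")]
```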
qid & accept id: (5992308, 6072328) query: How to create a function in DB2 that returns the value of a sequence? soup:
soup wrap:
CREATE FUNCTION "MYSCHEMA"."MY_FUNC"(PARAM1 VARCHAR(4000))
     RETURNS INT
SPECIFIC SQL110520140321900 BEGIN ATOMIC
     DECLARE VAR1 INT;
     DECLARE VAR2 INT;
     SET VAR1  = NEXTVAL FOR MY_SEQ;
     SET VAR2 = VAR1 + 2000; --or whatever magic you want to do
     RETURN VAR2;
END

To try it out:

SELECT MY_FUNC('aa') FROM SYSIBM.SYSDUMMY1;
qid & accept id: (6031181, 6032080) query: Find conflicted date intervals using SQL soup:
soup wrap:
declare @T table (ItemId int, IntervalID int, StartDate datetime,   EndDate datetime)

insert into @T
select 1, 1,  NULL,        '2011-01-15' union all
select 2, 1, '2011-01-16', '2011-01-25' union all
select 3, 1, '2011-01-26',  NULL        union all
select 4, 2,  NULL,        '2011-01-17' union all
select 5, 2, '2011-01-16', '2011-01-25' union all
select 6, 2, '2011-01-26',  NULL

select T1.*
from @T as T1
  inner join @T as T2
    on coalesce(T1.StartDate, '1753-01-01') < coalesce(T2.EndDate, '9999-12-31') and
       coalesce(T1.EndDate, '9999-12-31') > coalesce(T2.StartDate, '1753-01-01') and
       T1.IntervalID = T2.IntervalID and
       T1.ItemId <> T2.ItemId

Result:

ItemId      IntervalID  StartDate               EndDate
----------- ----------- ----------------------- -----------------------
5           2           2011-01-16 00:00:00.000 2011-01-25 00:00:00.000
4           2           NULL                    2011-01-17 00:00:00.000
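The same self-join runs unchanged on SQLite if the dates are ISO strings (they compare lexically, so the coalesce() sentinels still work); a runnable sketch of the sample above:

```python
import sqlite3

# The overlap self-join from the answer, against SQLite; a plain table stands
# in for the T-SQL table variable, and ISO date strings stand in for datetime.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (ItemId INTEGER, IntervalID INTEGER, StartDate TEXT, EndDate TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?, ?)", [
    (1, 1, None,         '2011-01-15'),
    (2, 1, '2011-01-16', '2011-01-25'),
    (3, 1, '2011-01-26', None),
    (4, 2, None,         '2011-01-17'),
    (5, 2, '2011-01-16', '2011-01-25'),
    (6, 2, '2011-01-26', None),
])

rows = con.execute("""
    SELECT T1.ItemId
    FROM t AS T1
      JOIN t AS T2
        ON coalesce(T1.StartDate, '1753-01-01') < coalesce(T2.EndDate, '9999-12-31')
       AND coalesce(T1.EndDate, '9999-12-31') > coalesce(T2.StartDate, '1753-01-01')
       AND T1.IntervalID = T2.IntervalID
       AND T1.ItemId <> T2.ItemId
    ORDER BY T1.ItemId
""").fetchall()
print(rows)  # [(4,), (5,)] -- items 4 and 5 conflict within interval 2
```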
qid & accept id: (6057352, 6057388) query: Find duplicates in SQL soup:

soup wrap:

A grouping on SSN should do it

SELECT
   ssn
FROM
   Table t1
GROUP BY
   ssn
HAVING COUNT(*) > 1

...or, if you have many rows per ssn and only want to find duplicate names:

...
HAVING COUNT(DISTINCT name) > 1 

Edit, oops, misunderstood

SELECT
   ssn
FROM
   Table t1
GROUP BY
   ssn
HAVING MIN(name) <> MAX(name)
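The MIN(name) <> MAX(name) trick is plain SQL and easy to verify — it flags an ssn only when at least two distinct names share it. A SQLite sketch with made-up rows (a `people` table stands in for the question's table):

```python
import sqlite3

# Hypothetical sample data; the 'people' table stands in for the OP's table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (ssn TEXT, name TEXT)")
con.executemany("INSERT INTO people VALUES (?, ?)", [
    ("111", "Alice"), ("111", "Alice"),   # duplicate rows, same name: not flagged
    ("222", "Bob"),   ("222", "Robert"),  # same ssn, different names: flagged
    ("333", "Carol"),
])

rows = con.execute("""
    SELECT ssn
    FROM people
    GROUP BY ssn
    HAVING MIN(name) <> MAX(name)
""").fetchall()
print(rows)  # [('222',)]
```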
qid & accept id: (6070894, 6071196) query: Detect overlapping ranges and correct then in oracle soup:

soup wrap:

Analytic functions could help:

select userid, map
, case when prevend >= startday then prevend+1 else startday end newstart
, endday
from
( select userid, map, startday, endday
  , lag(endday) over (partition by userid order by startday) prevend
  from mytable
)
order by userid, startday

Gives:

USERID  MAP     NEWSTART        ENDDAY
1       A       01/01/2011      01/05/2011
1       B       01/06/2011      01/10/2011
2       A       01/01/2011      01/07/2011
2       B       01/08/2011      01/10/2011
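A portable sketch of the same idea: SQLite also has lag(), but Oracle's prevend + 1 (add one day) has to become date(prevend, '+1 day'); dates here are ISO strings and the sample rows are invented:

```python
import sqlite3

# The lag()-based correction, adapted to SQLite: Oracle date arithmetic
# (prevend + 1) becomes date(prevend, '+1 day'). Sample data is made up.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (userid INTEGER, map TEXT, startday TEXT, endday TEXT)")
con.executemany("INSERT INTO mytable VALUES (?, ?, ?, ?)", [
    (1, 'A', '2011-01-01', '2011-01-05'),
    (1, 'B', '2011-01-04', '2011-01-10'),  # overlaps A's range
])

rows = con.execute("""
    SELECT userid, map,
           CASE WHEN prevend >= startday
                THEN date(prevend, '+1 day')
                ELSE startday END AS newstart,
           endday
    FROM (SELECT userid, map, startday, endday,
                 lag(endday) OVER (PARTITION BY userid ORDER BY startday) AS prevend
          FROM mytable)
    ORDER BY userid, startday
""").fetchall()
print(rows)  # B's start is pushed to the day after A ends
```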
qid & accept id: (6093085, 6098034) query: Mapping values without a table soup:

soup wrap:

Using a Common Table Expression (CTE) within your function will make it easy to replace the CTE with a base table later, e.g.:

WITH YearCodes (year_code, year) AS
     ( SELECT year_code, year
         FROM ( VALUES ( 'Y', 2000 ), 
                       ( '1', 2001 ), 
                       ( '2', 2002 ) ) 
              AS YearCodes ( year_code, year ) )
SELECT ...;

Alternatively, a derived table:

SELECT *
  FROM ( VALUES ( 'Y', 2000 ), 
                ( '1', 2001 ), 
                ( '2', 2002 ) ) 
       AS YearCodes ( year_code, year )
       -- other stuff here;

Perhaps that later base table could be a calendar table.

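SQLite accepts the same CTE-over-VALUES shape (there the CTE's column list names the columns, so no AS alias on the VALUES list is needed), which makes the idea easy to try:

```python
import sqlite3

# The CTE-over-VALUES mapping, in SQLite; the year codes are from the answer.
con = sqlite3.connect(":memory:")
rows = con.execute("""
    WITH YearCodes (year_code, year) AS (
        VALUES ('Y', 2000), ('1', 2001), ('2', 2002)
    )
    SELECT year FROM YearCodes WHERE year_code = '1'
""").fetchall()
print(rows)  # [(2001,)]
```

Swapping the CTE for a real table later means only the WITH clause changes, not the query that uses it.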
qid & accept id: (6094039, 6094075) query: Oracle: Updating a table column using ROWNUM in conjunction with ORDER BY clause soup:

soup wrap:

This should work (works for me)

update table_a outer 
set sequence_column = (
    select rnum from (

           -- evaluate row_number() for all rows ordered by your columns
           -- BEFORE updating those values into table_a
           select id, row_number() over (order by column1, column2) rnum  
           from table_a) inner 

    -- join on the primary key to be sure you'll only get one value
    -- for rnum
    where inner.id = outer.id);

OR you use the MERGE statement. Something like this.

merge into table_a u
using (
  select id, row_number() over (order by column1, column2) rnum 
  from table_a
) s
on (u.id = s.id)
when matched then update set u.sequence_column = s.rnum
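SQLite has no MERGE, but the first (correlated subquery) form carries over almost verbatim — a sketch with invented columns, using a non-reserved alias instead of inner/outer:

```python
import sqlite3

# Correlated UPDATE with row_number(), in SQLite; table/column names are
# illustrative, and 'ranked' replaces the reserved-word aliases inner/outer.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE table_a
               (id INTEGER PRIMARY KEY, column1 TEXT, column2 TEXT,
                sequence_column INTEGER)""")
con.executemany("INSERT INTO table_a (id, column1, column2) VALUES (?, ?, ?)",
                [(10, 'b', 'x'), (11, 'a', 'y'), (12, 'a', 'x')])

con.execute("""
    UPDATE table_a
    SET sequence_column = (
        SELECT rnum FROM (
            SELECT id, row_number() OVER (ORDER BY column1, column2) AS rnum
            FROM table_a
        ) AS ranked
        WHERE ranked.id = table_a.id
    )
""")
rows = con.execute("SELECT id, sequence_column FROM table_a ORDER BY id").fetchall()
print(rows)  # [(10, 3), (11, 2), (12, 1)]
```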
qid & accept id: (6121779, 6123939) query: MYSQL subset operation soup:

soup wrap:

From your pseudo code I guess that you want to check if a (dynamic) list of values is a subset of another list provided by a SELECT. If yes, then a whole table will be shown. If not, no rows will be shown.

Here's how to achieve that:

SELECT *
FROM tb_values
WHERE 
    ( SELECT COUNT(DISTINCT value)
      FROM tb_value
      WHERE isgoodvalue = true
        AND value IN (value1, value2, value3)
    ) = 3

UPDATED after OP's explanation:

SELECT *
FROM project
  JOIN 
    ( SELECT projectid
      FROM projectTagMap
      WHERE isgoodvalue = true
        AND tag IN (tag1, tag2, tag3)
      GROUP BY projectid
      HAVING COUNT(*) = 3
    ) AS ok
    ON ok.projectid = project.id
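The subset-check pattern is plain SQL and easy to verify; here's a SQLite run with three hypothetical tags standing in for tag1, tag2, tag3:

```python
import sqlite3

# Subset check via GROUP BY ... HAVING COUNT(*) = n; all names are made up.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE project (id INTEGER, name TEXT)")
con.execute("CREATE TABLE projectTagMap (projectid INTEGER, tag TEXT, isgoodvalue INTEGER)")
con.executemany("INSERT INTO project VALUES (?, ?)", [(1, 'p1'), (2, 'p2')])
con.executemany("INSERT INTO projectTagMap VALUES (?, ?, ?)", [
    (1, 'red', 1), (1, 'green', 1), (1, 'blue', 1),  # has all three tags
    (2, 'red', 1), (2, 'green', 1),                  # missing 'blue'
])

rows = con.execute("""
    SELECT project.name
    FROM project
      JOIN (SELECT projectid
            FROM projectTagMap
            WHERE isgoodvalue = 1
              AND tag IN ('red', 'green', 'blue')
            GROUP BY projectid
            HAVING COUNT(*) = 3) AS ok
        ON ok.projectid = project.id
""").fetchall()
print(rows)  # [('p1',)] -- only the project carrying all three tags
```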
qid & accept id: (6127338, 6127471) query: SQL/mysql - Select distinct/UNIQUE but return all columns? soup:

soup wrap:

You're looking for a group by:

select *
from table
group by field1

Which can occasionally be written with a distinct on statement:

select distinct on field1 *
from table

On most platforms, however, neither of the above will work because the behavior on the other columns is unspecified. (The first works in MySQL, if that's what you're using.)

You could fetch the distinct fields and stick to picking a single arbitrary row each time.

On some platforms (e.g. PostgreSQL, Oracle, T-SQL) this can be done directly using window functions:

select *
from (
   select *,
          row_number() over (partition by field1 order by field2) as row_number
   from table
   ) as rows
where row_number = 1

On others (MySQL, SQLite), you'll need to write subqueries that join the entire table with itself (example), which is not recommended.

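(Since this answer was written, SQLite gained window functions in 3.25, so the row_number() version now runs there too.) A quick check with made-up data, picking the lowest-field2 row per field1:

```python
import sqlite3

# One row per field1, chosen by field2 order; names are illustrative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (field1 TEXT, field2 INTEGER, payload TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ('a', 2, 'second'), ('a', 1, 'first'), ('b', 5, 'only'),
])

rows = con.execute("""
    SELECT field1, payload
    FROM (SELECT *, row_number() OVER (PARTITION BY field1 ORDER BY field2) AS rn
          FROM t)
    WHERE rn = 1
    ORDER BY field1
""").fetchall()
print(rows)  # [('a', 'first'), ('b', 'only')]
```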
qid & accept id: (6159814, 6159840) query: RSS to Database - How to Insert String with Any Character? soup:

soup wrap:

Using mysql_real_escape_string with the magic quotes enabled will escape your data twice.

Note: If magic_quotes_gpc is enabled, first apply stripslashes() to the data. Using this function [mysql_real_escape_string] on data which has already been escaped will escape the data twice.

When outputting that content you can use the stripslashes function:

echo stripslashes($data['description']);

EDIT

desc is a MySQL reserved word, so you must enclose it in backticks:

$query = "INSERT INTO FEED_CONTENT (title, link, `desc`)
          VALUES (
                  '".mysql_real_escape_string($title)."',
                  '".$href."',
                  '".mysql_real_escape_string($desc)."'
                 )";
qid & accept id: (6174355, 6174409) query: How to copy tables avoiding cursors in SQL? soup:

soup wrap:

You can use the OUTPUT clause with the MERGE statement to get a mapping between source id and target id, as described in this question: Using merge..output to get mapping between source.id and target.id

Here is some code that you can test. I use table variables instead of real tables.

Setup sample data:

-- @A and @B are the source tables
declare @A as table
(
  id int,
  FK_A_B int,
  name varchar(10)
)

declare @B as table
(
  id int,
  visible bit
)  

-- Sample data in @A and @B
insert into @B values (21, 1),(32, 0)
insert into @A values (1, 21, 'n1'),(5, 32, 'n2')


-- @C and @D are the target tables with id as identity columns
declare @C as table
(
  id int identity,
  FK_C_D int not null,
  name varchar(10)
)

declare @D as table
(
  id int identity,
  visible bit
)  

-- Sample data already in @C and @D
insert into @D values (1),(0)
insert into @C values (1, 'x1'),(1, 'x2'),(2, 'x3')

Copy data:

-- The @IdMap is a table that holds the mapping between
-- the @B.id and @D.id (@D.id is an identity column)
declare @IdMap table(TargetID int, SourceID int)

-- Merge from @B to @D.
merge @D as D             -- Target table
using @B as B             -- Source table
on 0=1                    -- 0=1 means that there are no matches for merge
when not matched then
  insert (visible) values(visible)    -- Insert to @D
output inserted.id, B.id into @IdMap; -- Capture the newly created inserted.id and
                                      -- map that to the source (@B.id)

-- Add rows to @C from @A with a join to
-- @IdMap to get the new id for the FK relation
insert into @C(FK_C_D, name)
select I.TargetID, A.name 
from @A as A
  inner join @IdMap as I
    on A.FK_A_B = I.SourceID

Result:

select *
from @D as D
  inner join @C as C
    on D.id = C.FK_C_D

id          visible id          FK_C_D      name
----------- ------- ----------- ----------- ----------
1           1       1           1           x1
1           1       2           1           x2
2           0       3           2           x3
3           1       4           3           n1
4           0       5           4           n2

You can test the code here: http://data.stackexchange.com/stackoverflow/q/101643/using-merge-to-map-source-id-to-target-id

qid & accept id: (6226690, 6227078) query: Creating a variable on database to hold global stats soup:

soup wrap:

You could use an indexed view, that SQL Server will automatically maintain:

create table dbo.users (
    ID int not null,
    Activated bit not null
)
go
create view dbo.user_status_stats (Activated,user_count)
with schemabinding
as
    select Activated,COUNT_BIG(*) from dbo.users group by Activated
go
create unique clustered index IX_user_status_stats on dbo.user_status_stats (Activated)
go

This just has two possible statuses, but could expand to more using a different data type. As I say, in this case, SQL Server will maintain the counts behind the scenes, so you can just query the view:

SELECT user_count from user_status_stats with (NOEXPAND) where Activated = 1

and it won't have to query the underlying table. On editions below Enterprise (and Developer), you need to use the WITH (NOEXPAND) hint.


Although as @Jim suggested, doing a COUNT(*) against an index when the index column(s) can satisfy the query criteria using equality comparisons should be pretty quick also.

qid & accept id: (6227934, 6229720) query: Create a view/temporary table from a column with CSV soup:

soup wrap:

I don't think this is an exact duplicate of the question referenced in the close votes. Similar yes, but not the same.

Not exactly beautiful, but:

CREATE OR REPLACE VIEW your_view AS
SELECT tt.ID, SUBSTR(value, sp, ep-sp) split, other_col1, other_col2...
  FROM (SELECT id, value
             , INSTR(','||value, ',', 1, L) sp  -- 1st posn of substr at this level
             , INSTR(value||',', ',', 1, L) ep  -- posn of delimiter at this level
          FROM tt JOIN (SELECT LEVEL L FROM dual CONNECT BY LEVEL < 20) q -- 20 is max #substrings
                    ON LENGTH(value)-LENGTH(REPLACE(value,','))+1 >= L 
) qq JOIN tt on qq.id = tt.id;

where tt is your table.

Works for CSV values with more than one element, and for null values. The CONNECT BY LEVEL < 20 is arbitrary; adjust it for your situation.

To illustrate:

    SQL> CREATE TABLE tt (ID INTEGER, c VARCHAR2(20), othercol VARCHAR2(20));

    Table created
    SQL> INSERT INTO tt VALUES (1, 'a,b,c', 'val1');

    1 row inserted
    SQL> INSERT INTO tt VALUES (2, 'd,e,f,g', 'val2');

    1 row inserted
    SQL> INSERT INTO tt VALUES (3, 'a,f', 'val3');

    1 row inserted
    SQL> INSERT INTO tt VALUES (4,'aa,bbb,cccc', 'val4');

    1 row inserted
    SQL> CREATE OR REPLACE VIEW myview AS
      2  SELECT tt.ID, SUBSTR(c, sp, ep-sp+1) splitval, othercol
      3    FROM (SELECT ID
      4               , INSTR(','||c,',',1,L) sp, INSTR(c||',',',',1,L)-1 ep
      5            FROM tt JOIN (SELECT LEVEL L FROM dual CONNECT BY LEVEL < 20) q
      6                      ON LENGTH(c)-LENGTH(REPLACE(c,','))+1 >= L
      7  ) q JOIN tt ON q.id =tt.id;

    View created
    SQL> select * from myview order by 1,2;

                                     ID SPLITVAL             OTHERCOL
--------------------------------------- -------------------- --------------------
                                      1 a                    val1
                                      1 b                    val1
                                      1 c                    val1
                                      2 d                    val2
                                      2 e                    val2
                                      2 f                    val2
                                      2 g                    val2
                                      3 a                    val3
                                      3 f                    val3
                                      4 aa                   val4
                                      4 bbb                  val4
                                      4 cccc                 val4

12 rows selected

SQL> 
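CONNECT BY is Oracle-only; on engines without it, the same split can be sketched with a recursive CTE instead. This SQLite version (illustrative, reusing the answer's tt table) peels off one comma-separated piece per recursion step:

```python
import sqlite3

# Recursive-CTE CSV split (a portable alternative to Oracle's CONNECT BY):
# each step takes the text before the first comma and recurses on the rest.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tt (id INTEGER, c TEXT, othercol TEXT)")
con.executemany("INSERT INTO tt VALUES (?, ?, ?)",
                [(1, 'a,b,c', 'val1'), (3, 'a,f', 'val3')])

rows = con.execute("""
    WITH RECURSIVE split(id, piece, rest, othercol) AS (
        SELECT id, '', c || ',', othercol FROM tt
        UNION ALL
        SELECT id,
               substr(rest, 1, instr(rest, ',') - 1),
               substr(rest, instr(rest, ',') + 1),
               othercol
        FROM split
        WHERE rest <> ''
    )
    SELECT id, piece, othercol
    FROM split
    WHERE piece <> ''
    ORDER BY id, piece
""").fetchall()
print(rows)
```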
qid & accept id: (6254626, 6255892) query: performing a sort of "reverse lookup" in sql server soup:

soup wrap:

Why not get both sets of comments at once?

SELECT
   ...
FROM
   Products P
   LEFT JOIN Comments C
      ON P.ProductID LIKE C.SpecID + '%'
      OR P.ProductID LIKE '%-' + C.SpecID

Also you could consider:

SELECT
   ...
FROM
   Products P
   LEFT JOIN Comments C
      ON (Len(C.SpecID) = 2 AND P.ProductID LIKE C.SpecID + '%')
      OR (Len(C.SpecID) > 2 AND P.ProductID LIKE '%-' + C.SpecID)

Testing is in order to see if one performs better than the other. If you find the queries to be too slow, then try adding some persisted calculated columns: one in Products to specify whether the product ID has a dash in it or not, and two in Comments, one with only product IDs and one with only suffixes. Indexes on these columns could help.

ALTER TABLE Comments ADD ExactSpecID AS 
   (CASE WHEN Len(SpecID) > 2 THEN SpecID ELSE NULL END) PERSISTED
ALTER TABLE Comments ADD Suffix AS 
   (CASE WHEN Len(SpecID) = 2 THEN SpecID ELSE NULL END) PERSISTED
qid & accept id: (6267954, 6268173) query: SQL SELECT complex expression in column - additional boolean soup:

soup wrap:

You can go with a left outer join:

select 
a.article_id, a.article_body, 
ua.article_id AS been_read -- will be non-null for read articles
from Articles a 
left outer join Users_Articles ua 
    on (ua.article_id = a.article_id and ua.user_id = $current_user_id)

Or with a subselect:

select 
a.article_id, a.article_body, 
(select 1 from Users_Articles ua 
    where ua.article_id = a.article_id 
    and ua.user_id = $current_user_id) as been_read --will be not null for read articles
from Articles a
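A quick SQLite check of the LEFT JOIN approach, with $current_user_id bound as a parameter and the null-test folded into a 0/1 flag for clarity; all names are illustrative:

```python
import sqlite3

# LEFT JOIN read-flag: articles the user has read get a matching
# Users_Articles row, unread ones get NULL (shown here as a 0/1 flag).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Articles (article_id INTEGER, article_body TEXT)")
con.execute("CREATE TABLE Users_Articles (user_id INTEGER, article_id INTEGER)")
con.executemany("INSERT INTO Articles VALUES (?, ?)", [(1, 'read me'), (2, 'unread')])
con.execute("INSERT INTO Users_Articles VALUES (7, 1)")  # user 7 has read article 1

rows = con.execute("""
    SELECT a.article_id,
           ua.article_id IS NOT NULL AS been_read
    FROM Articles a
    LEFT OUTER JOIN Users_Articles ua
        ON ua.article_id = a.article_id AND ua.user_id = ?
    ORDER BY a.article_id
""", (7,)).fetchall()
print(rows)  # [(1, 1), (2, 0)]
```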
qid & accept id: (6280565, 6284057) query: Unique constraint over multiple tables soup:

soup wrap:

You could try the following. You have to create a redundant UNIQUE constraint on (id, aId) in Parent (SQL is pretty dumb isn't it?!).

CREATE TABLE Child
(parentId INTEGER NOT NULL,
 aId INTEGER NOT NULL UNIQUE,
FOREIGN KEY (parentId,aId) REFERENCES Parent (id,aId),
createdOn TIMESTAMP NOT NULL);

Possibly a much better solution would be to drop parentId from the Child table altogether, add bId instead and just reference the Parent table based on (aId, bId):

CREATE TABLE Child
(aId INTEGER NOT NULL UNIQUE,
 bId INTEGER NOT NULL,
FOREIGN KEY (aId,bId) REFERENCES Parent (aId,bId),
createdOn TIMESTAMP NOT NULL);

Is there any reason why you can't do that?

qid & accept id: (6295231, 6295559) query: Ordering a MySQL query with LEFT JOIN soup:

soup wrap:

I think I've cracked it! The following query seems to give me what I need:

SELECT c.id, c.name, h.winner
FROM championships c
LEFT JOIN title_history h
ON c.id = h.championship
GROUP BY c.id
ORDER BY c.rank ASC, h.date_from ASC

EDIT: I haven't cracked it. As I'm grouping by championship ID, I'm now only getting the first title winner, even if there have been title winners after.

EDIT 2: Solved with the following query:

SELECT friendly_name,
(SELECT winner FROM title_history WHERE championship = c.id ORDER BY date_from DESC LIMIT 1) 
FROM championships AS c
ORDER BY name
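The correlated-subquery version is easy to sanity-check on SQLite with invented data (using name for both the select and the sort, for simplicity):

```python
import sqlite3

# Latest title holder per championship via a correlated ORDER BY ... LIMIT 1
# subquery; table and sample data are made up for illustration.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE championships (id INTEGER, name TEXT)")
con.execute("CREATE TABLE title_history (championship INTEGER, winner TEXT, date_from TEXT)")
con.execute("INSERT INTO championships VALUES (1, 'World Title')")
con.executemany("INSERT INTO title_history VALUES (?, ?, ?)", [
    (1, 'Early Champ',   '2010-01-01'),
    (1, 'Current Champ', '2011-06-01'),
])

rows = con.execute("""
    SELECT name,
           (SELECT winner FROM title_history
            WHERE championship = c.id
            ORDER BY date_from DESC LIMIT 1) AS current_holder
    FROM championships AS c
    ORDER BY name
""").fetchall()
print(rows)  # [('World Title', 'Current Champ')]
```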
qid & accept id: (6295650, 6295878) query: SQL query to search by day/month/year/day&month/day&year etc soup:

soup wrap:

You can write maintainable queries that additionally are fast by using the pg/temporal extension:

https://github.com/jeff-davis/PostgreSQL-Temporal

create index on events using gist(period(start_date, end_date));

select *
from events
where period(start_date, end_date) @> :date;

select *
from events
where period(start_date, end_date) && period(:start, :end);

You can even use it to disallow overlaps as a table constraint:

alter table events
add constraint overlap_excl
exclude using gist(period(start_date, end_date) WITH &&);

write all possible from, to and day/month/year combinations - not maintainable

It's actually more maintainable than you might think, e.g.:

select *
from events
join generate_series(:start_date, :end_date, :interval) as datetime
on start_date <= datetime and datetime < end_date;

But it's much better to use the above-mentioned period type.
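Where the extension is not available, the same containment and overlap tests can be written as ordinary comparisons on half-open ranges; here is a sqlite3 sketch (table and rows are hypothetical):

```python
import sqlite3

# Plain-SQL equivalents of period(...) @> :date and
# period(...) && period(:start, :end); the pg/temporal operators are Postgres-only.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (name TEXT, start_date TEXT, end_date TEXT)")
con.executemany("INSERT INTO events VALUES (?,?,?)", [
    ("a", "2011-01-01", "2011-01-10"),
    ("b", "2011-02-01", "2011-02-05"),
])
# containment: start_date <= :d < end_date (half-open interval)
contains = con.execute(
    "SELECT name FROM events WHERE start_date <= :d AND :d < end_date",
    {"d": "2011-01-05"}).fetchall()
# overlap: [s1,e1) and [s2,e2) overlap iff s1 < e2 AND s2 < e1
overlaps = con.execute(
    "SELECT name FROM events WHERE start_date < :e AND :s < end_date",
    {"s": "2011-01-08", "e": "2011-02-02"}).fetchall()
print(contains, overlaps)
```

The GiST index and the exclusion constraint have no plain-SQL equivalent, which is the real argument for the period type.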

qid & accept id: (6333687, 6333737) query: TSQL counting how many occurrences on each day soup:
soup wrap:
SELECT
   DateWithNoTimePortion = DateAdd(Day, DateDiff(Day, '19000101', DateCol), '19000101'),
   VisitorCount = Count(*)
FROM Log
GROUP BY DateDiff(Day, '19000101', DateCol);

For some reason I assumed you were using SQL Server. If that is not true, please let us know. I think the DateDiff method could work for you in other DBMSes depending on the functions they support, but they may have better ways to do the job (such as TRUNC in Oracle).

In SQL Server the above method is one of the fastest ways of doing the job. There are only two faster ways:

When SQL Server 2008 is not available, I think the method I posted is the best mix of speed and clarity for future developers looking at the code, avoiding doing magic stuff that isn't clear. You can see the tests backing up my speed claims.
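The same per-day grouping can be sketched in sqlite3, where date() plays the role of the DateAdd/DateDiff truncation; the Log/DateCol names follow the answer, the rows are invented:

```python
import sqlite3

# date() strips the time portion, mirroring the T-SQL DateAdd/DateDiff trick.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Log (DateCol TEXT)")
con.executemany("INSERT INTO Log VALUES (?)", [
    ("2011-06-01 09:00:00",), ("2011-06-01 17:30:00",), ("2011-06-02 08:15:00",),
])
rows = con.execute("""
SELECT date(DateCol) AS DateWithNoTimePortion, COUNT(*) AS VisitorCount
FROM Log
GROUP BY date(DateCol)
ORDER BY 1
""").fetchall()
print(rows)
```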

qid & accept id: (6355613, 6355807) query: Xml elements present in spite of null values soup:
soup wrap:

without changing the FOR XML PATH into FOR XML ELEMENTS to use the XSINIL switch

You can use elements xsinil with for xml path.

declare @T table (ID int identity, Name varchar(50))

insert into @T values ('Name1')
insert into @T values (null)
insert into @T values ('Name2')

select
  ID,
  Name
from @T
for xml path('item'), root('root'), elements xsinil

Result:

<root xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <item>
    <ID>1</ID>
    <Name>Name1</Name>
  </item>
  <item>
    <ID>2</ID>
    <Name xsi:nil="true" />
  </item>
  <item>
    <ID>3</ID>
    <Name>Name2</Name>
  </item>
</root>
qid & accept id: (6404158, 6404187) query: How to remove a prefix name from every table name in a mysql database soup:

soup wrap:

You can generate the necessary statements with a single query:

select 'RENAME TABLE ' || table_name ||  ' TO ' || substr(table_name, 5) ||';'
from information_schema.tables

Save the output of that query to a file and you have all the statements you need.

Or, if that returns 0s and 1s rather than the statements (|| is treated as logical OR in MySQL's default SQL mode), here's the version using concat instead:

select concat('RENAME TABLE ', concat(table_name, concat(' TO ', concat(substr(table_name, 5), ';'))))
from information_schema.tables;
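The generation step itself is easy to sketch outside the database as well. This Python fragment assumes a hypothetical fixed prefix 'old_' (the query above hard-codes a four-character prefix via substr(table_name, 5)):

```python
# Hypothetical prefix and table names standing in for information_schema rows.
prefix = "old_"
tables = ["old_users", "old_orders"]

# Build one RENAME TABLE statement per prefixed table.
statements = [
    f"RENAME TABLE {t} TO {t[len(prefix):]};"
    for t in tables if t.startswith(prefix)
]
print("\n".join(statements))
```

Filtering on startswith also guards against accidentally truncating tables that never had the prefix, which the substr-based query does not.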
qid & accept id: (6418214, 6419482) query: Table Normalization (Parse comma separated fields into individual records) soup:

soup wrap:

-- Setup:

declare @Device table(DeviceId int primary key, Parts varchar(1000))
declare @Part table(PartId int identity(1,1) primary key, PartName varchar(100))
declare @DevicePart table(DeviceId int, PartId int)

insert @Device
values
    (1, 'Part1, Part2, Part3'),
    (2, 'Part2, Part3, Part4'),
    (3, 'Part1')

--Script:

declare @DevicePartTemp table(DeviceId int, PartName varchar(100))

insert @DevicePartTemp
select DeviceId, ltrim(x.value('.', 'varchar(100)'))
from
(
    select DeviceId, cast('<x>' + replace(Parts, ',', '</x><x>') + '</x>' as xml) XmlColumn
    from @Device
)tt
cross apply
    XmlColumn.nodes('x') as Nodes(x)


insert @Part
select distinct PartName
from @DevicePartTemp

insert @DevicePart
select tmp.DeviceId, prt.PartId
from @DevicePartTemp tmp 
    join @Part prt on
        prt.PartName = tmp.PartName

-- Result:

select *
from @Part

PartId      PartName
----------- ---------
1           Part1
2           Part2
3           Part3
4           Part4


select *
from @DevicePart

DeviceId    PartId
----------- -----------
1           1
1           2
1           3
2           2
2           3
2           4
3           1   
qid & accept id: (6434996, 6455095) query: Manipulate the sort result considering the user preference - database soup:

soup wrap:

If you want each user to have independent sort orders, you need another table.

CREATE TABLE user_sort_order (
    name VARCHAR(?) NOT NULL REFERENCES your-other-table (name),
    user_id INTEGER NOT NULL REFERENCES users (user_id),
    sort_order INTEGER NOT NULL                -- Could be float or decimal
);

Then ordering is easy.

SELECT name 
FROM user_sort_order
WHERE user_id = ?
ORDER BY sort_order

There's no magic bullet for updating.
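A minimal sqlite3 sketch of the design, with made-up rows showing two users holding opposite orders over the same names:

```python
import sqlite3

# Per-user sort order lives in its own table, keyed by (name, user_id).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE user_sort_order (
    name TEXT NOT NULL, user_id INTEGER NOT NULL, sort_order INTEGER NOT NULL)""")
con.executemany("INSERT INTO user_sort_order VALUES (?,?,?)", [
    ("apples", 1, 2), ("pears", 1, 1),   # user 1 prefers pears first
    ("apples", 2, 1), ("pears", 2, 2),   # user 2 prefers apples first
])
def order_for(uid):
    return [r[0] for r in con.execute(
        "SELECT name FROM user_sort_order WHERE user_id = ? ORDER BY sort_order",
        (uid,))]
print(order_for(1), order_for(2))
```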

qid & accept id: (6440318, 6440437) query: Oracle : Automatic modification date on update soup:

soup wrap:

You thought wrongly: Oracle does what you tell it to do.

You can either try

update mytable a set title = 
      (select title from mytable2 b 
        where b.id     = a.id and 
              b.title != a.title)
 where exists (select 1 from mytable2 b
               where b.id = a.id and b.title != a.title)

or change the trigger to specifically check for a different title name.

create or replace
TRIGGER schema.name_of_trigger
BEFORE UPDATE ON schema.name_of_table
FOR EACH ROW
BEGIN
--  Check for modification of title:
    if :new.title != :old.title then
       :new.modify_date := sysdate;
    end if;
END;
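A rough sqlite analogue of the trigger (names are illustrative, and datetime('now') stands in for sysdate), showing that modify_date is only touched when title actually changes:

```python
import sqlite3

# Trigger fires after updates that touch title, but the WHEN clause
# skips updates where the value did not actually change.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE mytable (id INTEGER PRIMARY KEY, title TEXT, modify_date TEXT);
INSERT INTO mytable VALUES (1, 'old', NULL);
CREATE TRIGGER touch_modify_date
AFTER UPDATE OF title ON mytable
WHEN new.title IS NOT old.title
BEGIN
  UPDATE mytable SET modify_date = datetime('now') WHERE id = new.id;
END;
""")
con.execute("UPDATE mytable SET title = 'old' WHERE id = 1")  # same value
unchanged = con.execute("SELECT modify_date FROM mytable").fetchone()[0]
con.execute("UPDATE mytable SET title = 'new' WHERE id = 1")  # real change
changed = con.execute("SELECT modify_date FROM mytable").fetchone()[0]
print(unchanged, changed)
```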
qid & accept id: (6468506, 6468848) query: Can I delete the most recent record without sub-select in Oracle? soup:

soup wrap:

The most readable way is probably what you wrote. But it can be very wasteful depending on various factors. In particular, if there is no index on process_date it likely has to do 2 full table scans.

The difficulty in writing something that is both simple and more efficient, is that any view of the table that includes a ranking or ordering will also not allow modifications.

Here's one alternate way to approach it, using PL/SQL, that will probably be more efficient in some cases but is clearly less readable.

DECLARE
  CURSOR delete_cur IS
    SELECT /*+ FIRST_ROWS(1) */
      NULL
    FROM daily_statistics
    ORDER BY process_date DESC
    FOR UPDATE;
  trash  CHAR(1);
BEGIN
  OPEN delete_cur;
  FETCH delete_cur INTO trash;
  IF delete_cur%FOUND THEN
    DELETE FROM daily_statistics WHERE CURRENT OF delete_cur;
  END IF;
  CLOSE delete_cur;
END;
/

Also note this potentially produces different results from your statement if there can be multiple rows with the same process_date value. To make it handle duplicates requires a little more complexity:

DECLARE
  CURSOR delete_cur IS
    SELECT /*+ FIRST_ROWS(1) */
      process_date
    FROM daily_statistics
    ORDER BY process_date DESC
    FOR UPDATE;
  del_date  DATE;
  next_date DATE;
BEGIN
  OPEN delete_cur;
  FETCH delete_cur INTO del_date;
  IF delete_cur%FOUND THEN
    DELETE FROM daily_statistics WHERE CURRENT OF delete_cur;
  END IF;
  LOOP
    FETCH delete_cur INTO next_date;
    EXIT WHEN delete_cur%NOTFOUND OR next_date <> del_date;
    DELETE FROM daily_statistics WHERE CURRENT OF delete_cur;
  END LOOP;
  CLOSE delete_cur;
END;
/
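For comparison, the plain sub-select form (what the answer calls the most readable way) can be sketched in sqlite3; note it deletes every row that ties for the latest process_date, matching the second PL/SQL variant:

```python
import sqlite3

# Delete all rows sharing the maximum process_date.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE daily_statistics (process_date TEXT, hits INTEGER)")
con.executemany("INSERT INTO daily_statistics VALUES (?,?)", [
    ("2011-06-01", 5), ("2011-06-02", 7), ("2011-06-02", 3),  # two rows tie
])
con.execute("""DELETE FROM daily_statistics
               WHERE process_date = (SELECT MAX(process_date)
                                     FROM daily_statistics)""")
remaining = con.execute("SELECT process_date FROM daily_statistics").fetchall()
print(remaining)
```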
qid & accept id: (6524697, 6524885) query: using like operator in html5 database query soup:

soup wrap:

If you simply replace the '=' with a LIKE operator, you will get the same exact match answer as your current query. I assume you would like to use the LIKE operator to do something different (such as a begins with search).

I've shown how SQL databases normally handle this, but whether it works for you depends on how SQL-compatible the dialect used by the HTML5 engine is.

First, it depends on the concatenation syntax. Second, it depends on whether concatenating NULL with a string produces NULL or the string. Most professional databases yield NULL (which is good for you, because then this will work).

The following should work on MySQL or Oracle and some other databases:

SELECT * FROM bdreminders
WHERE firstname LIKE IFNULL( CONCAT(?,'%'), firstname)
AND lastname LIKE IFNULL( CONCAT(?,'%'), lastname)
AND baughtgift LIKE IFNULL( CONCAT(?,'%'), baughtgift)
ORDER BY firstname asc

or (for Oracle, PostgreSQL and others; substitute NVL or COALESCE where IFNULL is unavailable)

SELECT * FROM bdreminders
WHERE firstname LIKE IFNULL( ? ||'%', firstname)
AND lastname LIKE IFNULL( ? || '%', lastname)
AND baughtgift LIKE IFNULL( ? || '%', baughtgift)
ORDER BY firstname asc

or (for SQL Server and others)

SELECT * FROM bdreminders
WHERE firstname LIKE IFNULL( ? +'%', firstname)
AND lastname LIKE IFNULL( ? + '%', lastname)
AND baughtgift LIKE IFNULL( ? + '%', baughtgift)
ORDER BY firstname asc

I would try the last one first. If the above does not work and you get all bdreminders, the database does not concatenate NULL+string to NULL. In this case, I don't think you can use ISNULL as it will return the first non-null value and thus always return '%'.
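sqlite happens to satisfy both assumptions (|| concatenation, and NULL || '%' yielding NULL), so the second variant can be demonstrated directly; the table and rows here are made up:

```python
import sqlite3

# NULL || '%' is NULL in sqlite, so IFNULL falls back to the column itself
# and the predicate becomes a no-op filter for unset parameters.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bdreminders (firstname TEXT, lastname TEXT)")
con.executemany("INSERT INTO bdreminders VALUES (?,?)",
                [("John", "Smith"), ("Jane", "Doe")])
q = """SELECT firstname FROM bdreminders
       WHERE firstname LIKE IFNULL(?1 || '%', firstname)
       ORDER BY firstname"""
all_rows = con.execute(q, (None,)).fetchall()  # NULL parameter: no filtering
j_rows   = con.execute(q, ("Ja",)).fetchall()  # begins-with search
print(all_rows, j_rows)
```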

qid & accept id: (6536396, 6537272) query: How to convert two lists into adjacency matrix SQL Server T-SQL? soup:

soup wrap:

It was hard to avoid those null values in the pivot.

declare @t table (fruit varchar(10), colour varchar(10))

insert @t
select 'Apple',     'Red'   union all
select 'Orange',    'Red'   union all
select 'Berry',     'Green' union all
select 'PineApple', 'Green'

select * from (
select a.fruit, b.colour, case when c.fruit is null then 0 else 1 end found from 
(select distinct fruit, colour from @t) a
cross join 
(select distinct colour from @t) b
left outer join 
(select distinct fruit, colour from @t) c
on a.fruit = c.fruit and b.colour = c.colour) d
PIVOT
(max(found)  
FOR colour
in([red],[green])  
)AS p
order by 3, 1   

Output

fruit      red         green
---------- ----------- -----------
Apple      1           0
Orange     1           0
Berry      0           1
PineApple  0           1
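For engines without a PIVOT operator, the same matrix can be produced with a portable CASE-based pivot; here is a sqlite3 sketch using the answer's data:

```python
import sqlite3

# CASE-based pivot: one MAX(CASE ...) column per known colour value.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (fruit TEXT, colour TEXT)")
con.executemany("INSERT INTO t VALUES (?,?)", [
    ("Apple", "Red"), ("Orange", "Red"),
    ("Berry", "Green"), ("PineApple", "Green"),
])
rows = con.execute("""
SELECT fruit,
       MAX(CASE WHEN colour = 'Red'   THEN 1 ELSE 0 END) AS red,
       MAX(CASE WHEN colour = 'Green' THEN 1 ELSE 0 END) AS green
FROM t
GROUP BY fruit
ORDER BY green, fruit
""").fetchall()
print(rows)
```

Like the PIVOT version, the colour columns have to be listed explicitly; a dynamic column set would need generated SQL.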
qid & accept id: (6551214, 6556239) query: MySQL GROUP BY DateTime +/- 3 seconds soup:

soup wrap:

I'm using Tom H.'s excellent idea but doing it a little differently here:

Instead of finding all the rows that are the beginnings of chains, we can find all times that are the beginnings of chains, then go back and find the rows that match the times.

Query #1 here should tell you which times are the beginnings of chains by finding which times do not have any times below them but within 3 seconds:

SELECT DISTINCT a.Timestamp
FROM Table a
LEFT JOIN Table b
ON (b.Timestamp >= a.Timestamp - INTERVAL 3 SECOND
    AND b.Timestamp < a.Timestamp)
WHERE b.Timestamp IS NULL

And then for each row, we can find the largest chain-starting timestamp that is less than our timestamp with Query #2:

SELECT Table.id, MAX(StartOfChains.TimeStamp) AS ChainStartTime
FROM Table
JOIN ([query #1]) StartOfChains
ON Table.Timestamp >= StartOfChains.TimeStamp
GROUP BY Table.id

Once we have that, we can GROUP BY it as you wanted.

SELECT COUNT(*) --or whatever
FROM Table
JOIN ([query #2]) GroupingQuery
ON Table.id = GroupingQuery.id
GROUP BY GroupingQuery.ChainStartTime

I'm not entirely sure this is distinct enough from Tom H's answer to be posted separately, but it sounded like you were having trouble with implementation, and I was thinking about it, so I thought I'd post again. Good luck!
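Query #1 can be sketched in sqlite3, where datetime(a.ts, '-3 seconds') stands in for the INTERVAL arithmetic; the sample timestamps are invented:

```python
import sqlite3

# A timestamp starts a chain when no other timestamp falls in the
# 3 seconds just before it.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (ts TEXT)")
con.executemany("INSERT INTO events VALUES (?)", [
    ("2011-06-20 10:00:00",), ("2011-06-20 10:00:02",),  # same chain
    ("2011-06-20 10:00:10",),                            # new chain
])
starts = con.execute("""
SELECT DISTINCT a.ts
FROM events a
LEFT JOIN events b
  ON b.ts >= datetime(a.ts, '-3 seconds') AND b.ts < a.ts
WHERE b.ts IS NULL
ORDER BY a.ts
""").fetchall()
print(starts)
```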

qid & accept id: (6591613, 6591653) query: DB: saving user's height and weight soup:

soup wrap:

There are several ways... one is to just have two numeric columns, one for height, one for weight, then do the conversions (if necessary) at display time. Another is to create a "height" table and a "weight" table, each with a primary key that is linked from another table. Then you can store both English and metric values in these tables (along with any other meta info you want):

CREATE TABLE height (
    id          SERIAL PRIMARY KEY,
    english     VARCHAR,
    inches      INT,
    cm          INT,
    hands       INT  -- As in, the height of a horse
);

INSERT INTO height VALUES
    (1,'4 feet',           48, 122, 12),
    (2,'4 feet, 1 inch',   49, 124, 12),
    (3,'4 feet, 2 inches', 50, 127, 12),
    (4,'4 feet, 3 inches', 51, 130, 12),
    ....

You get the idea...

Then your users table will reference the height and weight tables--and possibly many other dimension tables--astrological sign, marital status, etc.

CREATE TABLE users (
    uid         SERIAL PRIMARY KEY,
    height      INT REFERENCES height(id),
    weight      INT references weight(id),
    sign        INT references sign(id),
    ...
);

Then to do a search for users between 4 and 5 feet:

SELECT *
FROM users
JOIN height ON users.height = height.id
WHERE height.inches >= 48 AND height.inches <= 60;
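A minimal sqlite3 sketch of the lookup-table search (the rows are made up and the weight table is omitted for brevity):

```python
import sqlite3

# Users reference a height dimension row; search by the inches column.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE height (id INTEGER PRIMARY KEY, english TEXT, inches INT, cm INT);
CREATE TABLE users  (uid INTEGER PRIMARY KEY, height INT REFERENCES height(id));
INSERT INTO height VALUES (1,'4 feet',48,122),(2,'5 feet',60,152),(3,'6 feet',72,183);
INSERT INTO users  VALUES (10,1),(11,2),(12,3);
""")
between = con.execute("""
SELECT users.uid
FROM users JOIN height ON users.height = height.id
WHERE height.inches >= 48 AND height.inches <= 60
ORDER BY uid
""").fetchall()
print(between)
```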

Several advantages to this method:

qid & accept id: (6611453, 6612326) query: PostgreSQL: trying to find miss and mister of the last month with highest rating soup:

soup wrap:

Say you run it once on the first day of the month, and cache the results, since counting votes on every page is kinda useless.

First some date arithmetic:

SELECT now(), 
       date_trunc( 'month', now() ) - '1 MONTH'::INTERVAL, 
       date_trunc( 'month', now() );

              now              |        ?column?        |       date_trunc       
-------------------------------+------------------------+------------------------
 2011-07-07 16:24:38.765559+02 | 2011-06-01 00:00:00+02 | 2011-07-01 00:00:00+02
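In sqlite the same bounds come from date modifiers rather than date_trunc(); a small sketch:

```python
import sqlite3

# 'start of month' plays the role of date_trunc('month', now()),
# and '-1 month' gives the lower bound of "last month".
con = sqlite3.connect(":memory:")
first_of_this, first_of_last = con.execute(
    "SELECT date('now','start of month'), date('now','start of month','-1 month')"
).fetchone()
print(first_of_last, first_of_this)
```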

OK, we got the bounds for the "last month" datetime range. Now we need a window function to get the first rows per gender:

SELECT * FROM (
   SELECT *, rank( ) over (partition by gender order by score desc ) 
   FROM (
      SELECT user_id, count(*) AS score FROM pref_rep 
      WHERE nice=true 
      AND last_rated >= date_trunc( 'month', now() ) - '1 MONTH'::INTERVAL
      AND last_rated <  date_trunc( 'month', now() )
      GROUP BY user_id) s1 
   JOIN users USING (user_id)) s2 
WHERE rank=1;

Note this can give you several rows in case of ex-aequo.

EDIT :

I've got a nice suggestion to cast timestamps to strings in order to find records for the last month (not for the last 30 days)

date_trunc() works much better.

If you make 2 queries, you'll have to make the count() twice. Since users can potentially vote many times for other users, that table will probably be the larger one, so scanning it once is a good thing.

You can't "leave joining back onto the users table to the outer part of the query too" because you need genders...

Query above takes about 30 ms with 1k users and 100k votes so you'd definitely want to cache it.

qid & accept id: (6616800, 6616899) query: SQL Insert into 2 tables, passing the new PK from one table as the FK in the other soup:

soup wrap:

Despite what others have answered, this absolutely is possible, although it takes 2 queries made consecutively with the same connection (to maintain the session state).

Here's the mysql solution (with executable test code below):

INSERT INTO Table1 (col1, col2) VALUES ( val1, val2 );
INSERT INTO Table2 (foreign_key_column) VALUES (LAST_INSERT_ID());

Note: These should be executed using a single connection.

Here's the test code:

create table tab1 (id int auto_increment primary key, note text);
create table tab2 (id int auto_increment primary key, tab2_id int references tab1, note text);
insert into tab1 values (null, 'row 1');
insert into tab2 values (null, LAST_INSERT_ID(), 'row 1');
select * from tab1;
select * from tab2;
mysql> select * from tab1;
+----+-------+
| id | note  |
+----+-------+
|  1 | row 1 |
+----+-------+
1 row in set (0.00 sec)

mysql> select * from tab2;
+----+---------+-------+
| id | tab2_id | note  |
+----+---------+-------+
|  1 |       1 | row 1 |
+----+---------+-------+
1 row in set (0.00 sec)
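sqlite's counterpart to MySQL's LAST_INSERT_ID() is last_insert_rowid(); the same two-statement pattern on a single connection (the foreign-key column is named tab1_id here for clarity, where the answer's test table calls it tab2_id):

```python
import sqlite3

# Insert the parent, then use last_insert_rowid() as the FK in the child.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tab1 (id INTEGER PRIMARY KEY AUTOINCREMENT, note TEXT);
CREATE TABLE tab2 (id INTEGER PRIMARY KEY AUTOINCREMENT,
                   tab1_id INT REFERENCES tab1, note TEXT);
""")
con.execute("INSERT INTO tab1 (note) VALUES ('row 1')")
con.execute("INSERT INTO tab2 (tab1_id, note) VALUES (last_insert_rowid(), 'row 1')")
pair = con.execute("""SELECT tab1.id, tab2.tab1_id
                      FROM tab1 JOIN tab2 ON tab2.tab1_id = tab1.id""").fetchone()
print(pair)
```

As with LAST_INSERT_ID(), the value is per-connection, so concurrent inserts from other connections cannot corrupt it.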
qid & accept id: (6621502, 6623392) query: how to a query to match the records in two different tables and if a match update with new values, no match prompt me to fill in the details? soup:

soup wrap:

    Here are the steps you can follow:

  • Load the CSV files into TableB (using any of BCP, BULK INSERT, the Import/Export wizard, or SSIS packages). This process is independent of updating TableA.
  • Now, for TableA, create an update trigger that checks, while updating, for SNos that are present in TableB and NOT in TableA. See the DDLs and queries below as an example and modify them accordingly:
    
        create  table TABLEA (
         PartNo varchar(30),
         SNo varchar(30),
         PO varchar(10),
         DO varchar(30))
    
         insert into TABLEA 
         select '1AB1009', 'GR7764', 'ST', 'OND'
         union
        select '1AB1009','GR7765','ST','OND'
    
        create  table TABLEB ( 
         SNo varchar(30)
        )
         insert into TABLEB
         select 'GR7764'
         union
         select 'GR7765'
    
         select * from TABLEA
         select * from TABLEB
         GO
    
    

    Now create an INSTEAD OF UPDATE trigger on TableA to warn about SNos missing in TableA when updating from the front-end app:

    
    
        CREATE TRIGGER missingSNOs ON TABLEA
        INSTEAD OF UPDATE
        AS  
    
            BEGIN
                if EXISTS (SELECT 1
                                FROM TABLEB B
                                LEFT OUTER JOIN
                                INSERTED I
                                ON B.SNO = I.SNO
                                WHERE I.SNO IS NULL
                                )
                begin
                         SELECT B.SNO
                                FROM TABLEB B
                                LEFT OUTER JOIN
                                INSERTED I
                                ON B.SNO = I.SNO
                                WHERE I.SNO IS NULL
                    RAISERROR('S.nos are missing in tableA which are present in tableB',16,1);
                    ROLLBACK;
                end     
            END
        GO
    
    

    Test if the trigger fires when the Sno are missing

    
    
    -- Errors with message as the SNO is missing
    update TABLEA
    set PartNo = 'newPartNo'
    where SNO = 'SnoNOTinB'
    
    -- works no errors as both SNOS are present in tableB
    update TABLEA
    set PartNo = 'new one'
    where SNO in ('GR7764', 'GR7765')
    
    -- Also, you don't have to join with tableB now; modify the query as below
    UPDATE A
    set A.Mat_No ='"+ Mat_No+"',WO_No='"+WO_No+"',
    Code = '"+Code+"',Desc = '"+Desc+"',
    Center='"+Center+"',
    Date='"+Date+"',
    Remarks='"+Remarks+"' 
    FROM TableA A                   
    WHERE A.Status = 'IN' 
    
    

    Finally, clean up the test objects:

    
    
        drop table TABLEA
          drop table TABLEB
    
    
    qid & accept id: (6673667, 6673781) query: Searching words in a database soup:

soup wrap:

    Strictly speaking your query is correct; however, what you're really looking for is "words starting with 'hyperlink'", which means the match will either follow a space character or sit at the start of the text field.

    select          O_ObjectID, 
                rtrim(O_Name) as O_Name
    from            A_Object
    where           O_Name like @NamePrefix + '%' OR O_Name like '% ' + @NamePrefix + '%'
    order by        O_Name
    

    Note the added space character in '% ' + @NamePrefix + '%'.
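The effect of the two LIKE patterns can be checked with a quick SQLite session in Python; the table and columns mirror the query above, and the sample names are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE A_Object (O_ObjectID INTEGER, O_Name TEXT)")
con.executemany(
    "INSERT INTO A_Object VALUES (?, ?)",
    [(1, "hyperlink list"), (2, "my hyperlinks"), (3, "superhyperlink")],
)

prefix = "hyperlink"
# Match words at the start of the field, or right after a space character;
# "superhyperlink" is excluded because the prefix sits mid-word.
rows = con.execute(
    "SELECT O_ObjectID FROM A_Object "
    "WHERE O_Name LIKE ? || '%' OR O_Name LIKE '% ' || ? || '%' "
    "ORDER BY O_ObjectID",
    (prefix, prefix),
).fetchall()
matched = [object_id for (object_id,) in rows]
```

Only rows 1 and 2 come back; the mid-word occurrence is filtered out, which is exactly the word-boundary behavior described above.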

    Your other option would be to use full text search which would mean your query would look like this:

    select          O_ObjectID, 
                rtrim(O_Name) as O_Name
    from            A_Object
    where           CONTAINS(O_Name, '"'+ @NamePrefix + '*"')
    order by        O_Name
    

    Performance of this will be significantly faster, as the column is indexed at the word level.

    qid & accept id: (6680228, 6680689) query: Managing Oracle Synonyms soup:

soup wrap:

    At least up to 10g, PUBLIC is not a real user. You cannot create objects in the "Public schema":

    SQL> CREATE TABLE public.foobar (id integer);
    
    CREATE TABLE public.foobar (id integer)
    
    ORA-00903: invalid table name
    
    SQL> CREATE TABLE system.foobar (id integer);
    
    Table created
    
    SQL> 
    

    If you run this query:

    SELECT object_name 
      FROM dba_objects 
     WHERE owner='PUBLIC' 
       AND object_type IN ('TABLE', 'VIEW');
    

    You can answer the question about pre-defined tables/views in the PUBLIC "schema".

    qid & accept id: (6688196, 6689227) query: What is the most efficient way to concatenate a string from all parent rows using T-SQL? soup:

soup wrap:

    To know for sure about performance you need to test. I have done some testing using your version (slightly modified) and the recursive CTE versions suggested by others.

    I used your sample table with 2048 rows all in one single folder hierarchy so when passing 2048 as parameter to the function there are 2048 concatenations done.

    The loop version:

    create function GetEntireLineage1 (@id int)
    returns varchar(max)
    as
    begin
      declare @ret varchar(max)
    
      select @ret = folder_name,
             @id = parent_id
      from Folder
      where id = @id
    
      while @@rowcount > 0
      begin
        select @ret = @ret + '-' + folder_name,
               @id = parent_id
        from Folder
        where id = @id
      end
      return @ret
    end
    

    Statistics:

     SQL Server Execution Times:
       CPU time = 125 ms,  elapsed time = 122 ms.
    

    The recursive CTE version:

    create function GetEntireLineage2(@id int)
    returns varchar(max)
    begin
      declare @ret varchar(max);
    
      with cte(id, name) as
      (
        select f.parent_id,
               cast(f.folder_name as varchar(max))
        from Folder as f
        where f.id = @id
        union all
        select f.parent_id,
               c.name + '-' + f.folder_name
        from Folder as f
          inner join cte as c
            on f.id = c.id
      )
      select @ret = name
      from cte
      where id is null
      option (maxrecursion 0)
    
      return @ret
    end
    

    Statistics:

     SQL Server Execution Times:
       CPU time = 187 ms,  elapsed time = 183 ms.
    

    So between these two it is the loop version that is more efficient, at least on my test data. You need to test on your actual data to be sure.

    Edit

    Recursive CTE with for xml path('') trick.

    create function [dbo].[GetEntireLineage4](@id int)
    returns varchar(max)
    begin
      declare @ret varchar(max) = '';
    
      with cte(id, lvl, name) as
      (
        select f.parent_id,
               1,
               f.folder_name
        from Folder as f
        where f.id = @id
        union all
        select f.parent_id,
               lvl + 1,
               f.folder_name
        from Folder as f
          inner join cte as c
            on f.id = c.id
      )
      select @ret = (select '-'+name
                     from cte
                     order by lvl
                     for xml path(''), type).value('.', 'varchar(max)')
      option (maxrecursion 0)
    
      return stuff(@ret, 1, 1, '')
    end
    

    Statistics:

     SQL Server Execution Times:
       CPU time = 31 ms,  elapsed time = 37 ms.
    
    qid & accept id: (6691865, 6691997) query: How do I name a column as a date value soup:

soup wrap:

    Try this technique:

    declare @dt datetime
    declare @sql varchar(100)
    set @dt = getdate()
    set @sql = 'select 1 as [ ' + convert( varchar(25),@dt,120) + ']'  
    exec (@sql)
    

    In your case:

    declare @dt datetime
    declare @sql varchar(100)
    set @dt = getdate()
    set @sql = 'select 0 as [ ' + convert( varchar(25),@dt,120) + ']'  
    exec (@sql)
    
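The same build-then-execute idea can be sketched with Python's sqlite3; a fixed date is used here so the output is predictable, and note that in SQLite double quotes (rather than square brackets) delimit identifiers:

```python
import sqlite3
from datetime import datetime

con = sqlite3.connect(":memory:")

# Build the statement text first, then execute it -- the same idea as
# assembling @sql and calling EXEC in the T-SQL above.
stamp = datetime(2011, 8, 10).strftime("%Y-%m-%d %H:%M:%S")
sql = f'SELECT 0 AS "{stamp}"'   # double quotes delimit identifiers in SQLite
cur = con.execute(sql)

column_name = cur.description[0][0]  # the alias the dynamic SQL produced
value = cur.fetchone()[0]
```

The cursor's column metadata shows the date string as the column name, confirming the alias was baked in at execution time.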
    qid & accept id: (6743541, 6743635) query: Condition based on column in data soup:

soup wrap:

    It is possible if you know the number of "custom" columns in advance.

    you can replace

    and table1.(value of table2.field) = 'Red'
    

    with

    and    case table2.field
             when 'custom1' then table1.custom1
             when 'custom2' then table1.custom2
             when 'custom3' then table1.custom3
             ...
             else NULL
           end
           = 'Red'
    
    qid & accept id: (6745525, 6789844) query: Oracle logging changes to XML soup:

soup wrap:

    I have found a little workaround:

    First, get a little information about the table:

    select 'xmlelement("'|| column_name||'",new.' || column_name || '),'  from all_tab_columns where lower(table_name) = 'my_table';
    

    and just copy-paste the result into

    select xmlelement("doc",
    
    --paste here
    
    ) from dual;
    

    Ugly, but working.

    qid & accept id: (6810923, 6812237) query: Oracle SQL - How do i output data from a table based on the day of the week from a hiredate column? soup:

soup wrap:

    Hoons's answer is correct, but it will only work if your Oracle session is using the English language (NLS_LANGUAGE).

    Another query that works for all languages is

    select name, position, hiredate
     from table
    where to_char(sysdate, 'D') in (1, 2); -- 1 monday; 2 tuesday
    

    to_char(sysdate, 'D') returns the following values for each day of the week (note that the numbering depends on the NLS_TERRITORY setting; the mapping below assumes a territory where the week starts on Monday):

    1 monday
    2 tuesday
    3 wednesday
    4 thursday
    5 friday
    6 saturday
    7 sunday
    
    qid & accept id: (6811449, 6811612) query: Using IFNULL to set NULLs to zero soup:

soup wrap:

    EDIT: NEW INFO BASED ON FULL QUERY

    The reason the counts can be null in the query you specify is that a LEFT JOIN will return NULLs for unmatched records. So the subquery itself is not returning null counts (hence all the responses and confusion). You need to apply the IFNULL in the outermost SELECT, as follows:

    SELECT  qa.*, user_profiles.*, c.*, n.pid, ifnull(n.ans_count, 0) as ans_count
    FROM    qa
            JOIN user_profiles
              ON user_id = author_id
            LEFT JOIN (SELECT cm_id,
                              cm_author_id,
                              id_fk,
                              cm_text,
                              cm_timestamp,
                              first_name AS cm_first_name,
                              last_name AS cm_last_name,
                              facebook_id AS cm_fb_id,
                              picture AS cm_picture
                        FROM  cm
                        JOIN  user_profiles
                          ON  user_id = cm_author_id) AS c
              ON id = c.id_fk
            LEFT JOIN (SELECT   parent_id AS pid, COUNT(*) AS ans_count
                         FROM   qa
                        GROUP   BY parent_id) AS n
              ON id = n.pid
    WHERE   id  LIKE '%'
    ORDER   BY id DESC
    
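The LEFT JOIN behavior can be reproduced in miniature with Python's sqlite3 (which also supports IFNULL); the qa table below is a simplified stand-in for the real schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE qa (id INTEGER PRIMARY KEY, parent_id INTEGER)")
con.executemany("INSERT INTO qa VALUES (?, ?)",
                [(1, None), (2, 1), (3, 1), (4, None)])

# The grouped subquery itself never yields NULL counts...
subquery = ("SELECT parent_id AS pid, COUNT(*) AS ans_count "
            "FROM qa GROUP BY parent_id")

# ...but LEFT JOINing it leaves ans_count NULL on every unmatched row.
raw = con.execute(
    f"SELECT qa.id, n.ans_count FROM qa "
    f"LEFT JOIN ({subquery}) AS n ON qa.id = n.pid ORDER BY qa.id"
).fetchall()

# Applying IFNULL in the outermost SELECT turns those NULLs into zeros.
fixed = con.execute(
    f"SELECT qa.id, IFNULL(n.ans_count, 0) FROM qa "
    f"LEFT JOIN ({subquery}) AS n ON qa.id = n.pid ORDER BY qa.id"
).fetchall()
```

Only id 1 has answers, so the raw join carries NULL for the other rows, and the outermost IFNULL is what converts them to 0.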

    OLD RESPONSE

    Can you explain in more detail what you are seeing and what you expect to see? Count can't return NULLs.

    Run this set of queries and you'll see that the counts are always 2. You can change the way the NULL parent_ids are displayed (as NULL or 0), but the count itself will always return.

    create temporary table if not exists SO_Test(
        parent_id int null);
    
    insert into SO_Test(parent_id)
    select 2 union all select 4 union all select 6 union all select null union all select null union all select 45 union all select 2;
    
    
    SELECT IFNULL(parent_id, 0) AS pid, COUNT(*) AS ans_count
       FROM SO_Test
      GROUP BY IFNULL(parent_id, 0);
    
    SELECT parent_id AS pid, COUNT(*) AS ans_count
       FROM SO_Test
      GROUP BY parent_id;
    
    drop table SO_Test;
    
    qid & accept id: (6814426, 6814665) query: How to delete smaller records for each group? soup:

soup wrap:

    You can write the following query if you are working in Oracle:

    delete from item_table where rowid not in
    (
         select rowid from item_table 
         where (item,price1) in (select item,max(price1) from item_table group by item)
            or (item,price2) in (select item,max(price2) from item_table group by item)
    )
    

    Note that ROWID is not available in SQL Server or MySQL, so please tell us which database you are using.

    You can also write it as follows:

    delete from item_table where (item,date,shift,price1,price2 ) not in
        (
            select item,date,shift,price1,price2  from item_table 
            where (item,price1) in (select item,max(price1) from item_table group by item)
               or (item,price2) in (select item,max(price2) from item_table group by item)
        )
    
    qid & accept id: (6814563, 6816558) query: get attribute list from mongodb object soup:

soup wrap:

    The code:

    > db.mycoll.insert( {num:3, text:"smth", date: new Date(), childs:[1,2,3]})
    > var rec = db.mycoll.findOne();
    
    > for (key in rec) { 
        var val = rec[key];
        print( key + "(" + typeof(val) + "): " + val ) }
    

    will print:

    _id(object): 4e2d688cb2f2b62248c1c6bb
    num(number): 3
    text(string): smth
    date(object): Mon Jul 25 2011 15:58:52 GMT+0300
    childs(object): 1,2,3
    

    (javascript array and date are just "object")

    This shows the "schema" of only the top level; if you want to look deeper, a recursive tree-walking function is needed.
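The mongo shell is JavaScript, but the idea of such a tree-walking function can be sketched in Python over a plain nested dict; this standalone helper is illustrative only, not part of any MongoDB driver:

```python
def walk_schema(doc, prefix=""):
    """Recursively collect key paths and value type names from a nested document."""
    schema = {}
    for key, val in doc.items():
        path = prefix + key
        schema[path] = type(val).__name__
        if isinstance(val, dict):
            schema.update(walk_schema(val, path + "."))
        elif isinstance(val, list):
            # descend into embedded documents inside arrays
            for item in val:
                if isinstance(item, dict):
                    schema.update(walk_schema(item, path + "."))
    return schema

rec = {"num": 3, "text": "smth", "childs": [1, 2, 3],
       "meta": {"created": "2011-07-25", "tags": [{"name": "db"}]}}
schema = walk_schema(rec)
```

Every nested key is reported under a dotted path (e.g. meta.tags.name), which is the "deeper look" the top-level loop cannot give you.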

    qid & accept id: (6836478, 6836613) query: Codeigniter run query before a update soup:

soup wrap:

    You can write your own function in the file core/MY_Model.php to do that:

    function queryThenUpdate($query,$update)
    {
       $query = $this->db->query($query);
       //use as you need $query
       $this->db->update($update['table'],$update['data']);
    }
    

    where:

    1. $query is your actual query: SELECT * FROM ...
    2. $update is an array of two elements $update['table'] is the table to update and $update['data'] is the updating data as specified on codeigniter active record's documentation

    Then make every model extend MY_Model:

    class Your_Model extends MY_Model
    

    And every time you need to update something:

    $this->Your_Model->queryThenUpdate($query,$update)
    
    qid & accept id: (6934563, 6934919) query: Lock a database or table in sqlite (Android) soup:

soup wrap:

    Let's say SYNCHRONICED is 0 when the record is inserted or updated, 1 when the record is sent to the server, and 2 when the server has acknowledged the sync.

    The T1 thread should do:

    BEGIN;
    SELECT ID, VALUE FROM TAB WHERE SYNCHRONICED = 0;
    UPDATE TAB SET SYNCHRONICED = 1 WHERE SYNCHRONICED = 0;
    COMMIT;
    

    The select statement gives the records to send to the server.

    Now any insert or update to TAB should set SYNCHRONICED = 0;

    When the server responds with ack,

    UPDATE TAB SET SYNCHRONICED = 2 WHERE SYNCHRONICED = 1;
    

    This will not affect any records updated or inserted since their SYNCHRONICED is 0.
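The whole three-state cycle can be simulated with Python's sqlite3; the table here (id, value, and the SYNCHRONICED flag, kept with the answer's spelling) is a minimal stand-in for the real schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tab (id INTEGER PRIMARY KEY, value TEXT, "
            "synchroniced INTEGER NOT NULL DEFAULT 0)")
con.executemany("INSERT INTO tab (value) VALUES (?)", [("a",), ("b",)])

# T1: read the pending records and mark them in-flight in one transaction.
with con:
    pending = con.execute(
        "SELECT id, value FROM tab WHERE synchroniced = 0").fetchall()
    con.execute("UPDATE tab SET synchroniced = 1 WHERE synchroniced = 0")

# A concurrent insert arrives while waiting for the server: state stays 0.
con.execute("INSERT INTO tab (value, synchroniced) VALUES ('c', 0)")

# Server acknowledges: only the in-flight rows move to state 2.
con.execute("UPDATE tab SET synchroniced = 2 WHERE synchroniced = 1")

states = dict(con.execute("SELECT value, synchroniced FROM tab").fetchall())
```

The row inserted mid-flight keeps state 0, so it will be picked up by the next sync round instead of being marked acknowledged by mistake.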

    qid & accept id: (6937080, 6937175) query: how to add primary key to table having duplicate values? soup:

soup wrap:

    Add the PK as AUTO_INCREMENT; it will change all 0 values automatically:

    ALTER TABLE table_a
      CHANGE COLUMN id id INT(11) NOT NULL AUTO_INCREMENT,
      ADD PRIMARY KEY (id);
    

    Afterwards, the AUTO_INCREMENT property can be removed:

    ALTER TABLE table_a
      CHANGE COLUMN id id INT(11) NOT NULL;
    
    qid & accept id: (6994843, 6994915) query: MySQL query where JOIN depends on CASE soup:

soup wrap:

    It probably needs tweaking to return the correct results but I hope you get the idea:

    SELECT ft1.task, COUNT(ft1.id) AS count
    FROM feed_tasks ft1
    LEFT JOIN pages p1 ON ft1.type=1 AND p1.id = ft1.reference_id
    LEFT JOIN urls u1 ON ft1.type=2 AND u1.id = ft1.reference_id
    WHERE COALESCE(p1.id, u1.id) IS NOT NULL
    AND ft1.account_id IS NOT NULL
    AND a1.user_id = :user_id
    

    Edit:

    A little note about CASE...END. Your original code does not run because, unlike in PHP or JavaScript, the SQL CASE is not a flow-control structure that lets you choose which part of the code will run. Instead, it returns an expression. So you can do this:

    SELECT CASE
        WHEN foo<0 THEN 'Yes'
        ELSE 'No'
    END AS is_negative
    FROM bar
    

    ... but not this:

    -- Invalid
    CASE 
        WHEN foo<0 THEN SELECT 'Yes' AS is_negative
        ELSE SELECT 'No' AS is_negative
    END
    FROM bar
    
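To see the valid, expression-style form in action, here is a quick check using Python's sqlite3 (CASE behaves the same way in MySQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bar (foo INTEGER)")
con.executemany("INSERT INTO bar VALUES (?)", [(-5,), (3,)])

# CASE produces a value per row, just like any other expression in the
# SELECT list -- it never chooses which statement runs.
rows = con.execute(
    "SELECT foo, CASE WHEN foo < 0 THEN 'Yes' ELSE 'No' END AS is_negative "
    "FROM bar ORDER BY foo"
).fetchall()
```

Each row gets its own 'Yes' or 'No' value, which is why CASE fits inside a SELECT (or a JOIN condition) but cannot wrap whole statements.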
    qid & accept id: (7008452, 7008500) query: How do I select any value from SP? soup:

soup wrap:

    You execute the stored procedure.

    exec MySP
    

    Result:

    (No column name)
    2011-08-10 00:00:00.000
    

    Edit

    Stored procedure with output parameter @startdate

    alter PROCEDURE MySP
    (
    @startdate datetime = null out,
    @enddate datetime = null
    )
    AS
    BEGIN
      declare @date datetime 
      Set @date= convert(datetime,convert(varchar(10),getdate(),101))
      SET @startdate = ISNULL(@startdate,convert (datetime,convert(varchar(10),getdate(),101)))
    END
    

    Use like this

    declare @D datetime
    exec MySP @D out
    select @D
    
    qid & accept id: (7112526, 7112793) query: Checking the value of a field and updating it soup:

soup wrap:

    One way to find such rows (or tuples) would be a query like:

    SELECT job_num, item_code, invoice_num
    FROM tablename
    WHERE job_num = 94834 AND item_code = "EFC-ASSOC-01" AND invoice_num = ""
    

    or follow @Ben's advice if the empty string is a problem. Then you can do an update:

    UPDATE tablename SET invoice_num = ? WHERE job_num = .........
    

    However, the problem with this approach is that if you're not using the primary key to choose a row in the update statement, multiple rows could get updated (similarly, the select statement could return multiple rows). So, you'll have to look at the database schema and determine the primary key column(s) of the table, and make sure that all of the primary key columns are used in the WHERE clause of the update. If you just do

    UPDATE tablename SET invoice_num = value WHERE invoice_num = ""
    

    all rows with that value of invoice_num will be updated, which may not be what you want.
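
    For example, assuming (hypothetically) that job_num and item_code together form the primary key, the update can be pinned to exactly one row:

```sql
-- Hypothetical sketch: assumes (job_num, item_code) is the primary key
-- and that "12345" is the invoice number you want to set
UPDATE tablename
SET invoice_num = "12345"
WHERE job_num = 94834
  AND item_code = "EFC-ASSOC-01";
```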

    qid & accept id: (7116576, 7123003) query: SQL find consecutive quarters soup:

    soup wrap:

    First of all, your data model is making it hard for you. You need an easy way to spot consecutive quarters, so you need a table to hold that information, with a key that is a rising increment: how else do you expect the computer to know that Spring 2009 follows Winter 2008?

    Anyway, here's my version of your test data. I'm using names to make it easier to see what's going on:

    SQL> select s.name as student
      2         , c.name as class
      3         , q.season||' '||q.year as quarter
      4         , q.q_id
      5         , c.base_cost
      6  from  enrolments e
      7          join students s
      8              on (s.s_id = e.s_id)
      9          join classes c
     10              on (c.c_id = e.c_id)
     11          join quarters q
     12              on (q.q_id = c.q_id)
     13  order by s.s_id, q.q_id
     14  /
    
    STUDENT    CLASS                QUARTER               Q_ID  BASE_COST
    ---------- -------------------- --------------- ---------- ----------
    Sheldon    Introduction to SQL  Spring 2008            100        100
    Sheldon    Advanced SQL         Spring 2009            104        150
    Howard     Introduction to SQL  Spring 2008            100        100
    Howard     Information Theory   Summer 2008            101         75
    Rajesh     Information Theory   Summer 2008            101         75
    Leonard    Crypto Foundation    Autumn 2008            102        120
    Leonard    PHP for Dummies      Winter 2008            103         90
    Leonard    Advanced SQL         Spring 2009            104        150
    
    8 rows selected.
    
    SQL>
    

    As you can see, I have got a table QUARTERS whose primary key Q_ID increments by one in calendrical order.
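
    A minimal sketch of that QUARTERS table (the queries below only reveal the q_id, season and year columns; the exact types and constraints here are my assumption):

```sql
-- Sketch only: column types and constraints are assumed
CREATE TABLE quarters (
    q_id   NUMBER PRIMARY KEY,   -- rises by exactly 1 per calendar quarter
    season VARCHAR2(6) NOT NULL, -- 'Spring', 'Summer', 'Autumn', 'Winter'
    year   NUMBER(4)   NOT NULL
);

INSERT INTO quarters VALUES (100, 'Spring', 2008);
INSERT INTO quarters VALUES (101, 'Summer', 2008);
INSERT INTO quarters VALUES (102, 'Autumn', 2008);
INSERT INTO quarters VALUES (103, 'Winter', 2008);
INSERT INTO quarters VALUES (104, 'Spring', 2009);
```

    With q_id dense like this, "consecutive quarters" reduces to the simple test q_id - prev_q_id = 1.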

    I'm going to use Oracle syntax to solve this, specifically the LAG analytic function:

    SQL> select s.name as student
      2         , c.name as class
      3         , q.season||' '||q.year as quarter
      4         , q.q_id
      5         , c.base_cost
      6         , lag (q.q_id) over (partition by s.s_id order by q.q_id) prev_q_id
      7  from  enrolments e
      8          join students s
      9              on (s.s_id = e.s_id)
     10          join classes c
     11              on (c.c_id = e.c_id)
     12          join quarters q
     13              on (q.q_id = c.q_id)
     14  order by s.s_id, q.q_id
     15  /
    
    STUDENT    CLASS                QUARTER               Q_ID  BASE_COST  PREV_Q_ID
    ---------- -------------------- --------------- ---------- ---------- ----------
    Sheldon    Introduction to SQL  Spring 2008            100        100
    Sheldon    Advanced SQL         Spring 2009            104        150        100
    Howard     Introduction to SQL  Spring 2008            100        100
    Howard     Information Theory   Summer 2008            101         75        100
    Rajesh     Information Theory   Summer 2008            101         75
    Leonard    Crypto Foundation    Autumn 2008            102        120
    Leonard    PHP for Dummies      Winter 2008            103         90        102
    Leonard    Advanced SQL         Spring 2009            104        150        103
    
    8 rows selected.
    
    SQL>
    

    So, by looking at the PREV_Q_ID column we can see that Howard, Sheldon and Leonard have each taken more than one course. Only Leonard has taken three courses. By comparing the values in the PREV_Q_ID and Q_ID columns we can see that Howard's two courses are in consecutive quarters, whereas Sheldon's are not.

    Now we can do some maths:

    SQL> select student
      2          , class
      3          , quarter
      4          , base_cost
      5          , discount*100 as discount_pct
      6          , base_cost - (base_cost*discount) as actual_cost
      7  from
      8          ( select student
      9                  , class
     10                  , quarter
     11                  , base_cost
     12                  , case
     13                      when prev_q_id is not null
     14                           and q_id - prev_q_id = 1
     15                      then 0.2
     16                      else 0
     17                    end       as discount
     18                 , s_id
     19                 , q_id
     20            from
     21                  (
     22                  select s.name as student
     23                         , c.name as class
     24                         , q.season||' '||q.year as quarter
     25                         , q.q_id
     26                         , c.base_cost
     27                         , lag (q.q_id) over (partition by s.s_id order by q.q_id) prev_q_id
     28                         , s.s_id
     29                  from  enrolments e
     30                          join students s
     31                              on (s.s_id = e.s_id)
     32                          join classes c
     33                              on (c.c_id = e.c_id)
     34                          join quarters q
     35                              on (q.q_id = c.q_id)
     36                  )
     37          )
     38  order by s_id, q_id
     39  /
    

    (artificial break to obviate the need to scroll down to see the results)

    STUDENT    CLASS                QUARTER      BASE_COST DISCOUNT_PCT ACTUAL_COST
    ---------- -------------------- ----------- ---------- ------------ -----------
    Sheldon    Introduction to SQL  Spring 2008        100            0         100
    Sheldon    Advanced SQL         Spring 2009        150            0         150
    Howard     Introduction to SQL  Spring 2008        100            0         100
    Howard     Information Theory   Summer 2008         75           20          60
    Rajesh     Information Theory   Summer 2008         75            0          75
    Leonard    Crypto Foundation    Autumn 2008        120            0         120
    Leonard    PHP for Dummies      Winter 2008         90           20          72
    Leonard    Advanced SQL         Spring 2009        150           20         120
    
    8 rows selected.
    
    SQL>
    

    So, Howard and Leonard get discounts for their consecutive classes, and Sheldon and Raj don't.

    qid & accept id: (7246987, 7247630) query: How to properly index tables used in a query with multiple joins soup:

    soup wrap:

    Note: SQL Server is what I use. If you're using something else, this may not apply. Also note: I'm going to discuss indexes that aid in accessing data from a table. Covering indexes are a separate topic that I am not addressing here.

    When accessing a table, there are three ways to do it: read the whole table, access it by filtering criteria, or access it by relational criteria (joining from rows that have already been read).

    I started by making a list of all tables, with filtering criteria and relational criteria.

    articles
    
      articles.expirydate > 'somedate'
      articles.dateadded > 'somedate'
      articles.status >= someint
    
      articles.article_id <-> articles_to_geo.article_id
      articles.article_id <-> articles_to_badges.article_id
      articles.site_id <-> sites.id
    
    articles_to_geo
    
      articles_to_geo.article_id <-> articles.article_id
      articles_to_geo.whitelist_city_id <-> cities_whitelist.city_id
    
    cities_whitelist
    
      cities_whitelist.published = someint
    
      cities_whitelist.city_id <-> articles_to_geo.whitelist_city_id
      cities_whiltelist.city_id <-> cities.city_id
    
    cities
    
      cities.city_id <-> cities_whiltelist.city_id
    
    articles_to_badges
    
      articles_to_badges.badge_id in (some ids)
    
      articles_to_badges.article_id <-> articles.article_id
      article_to_badges.badge_id <-> badges.id
    
    badges
    
      badges.id <-> article_to_badges.badge_id
    
    sites
    
      sites.id <-> articles.site_id
    

    The clumsiest way to approach this is to simply make an index on each table that supports each relational and filtering criterion... then let the optimizer choose which indexes it wants to use. This approach is great for IO performance, and simple to do... but it costs a lot of space in unused indexes.

    The next best way is to run the query with these options turned on:

    SET STATISTICS IO ON
    SET STATISTICS TIME ON
    

    If a particular set of tables is using more IO, indexing efforts can be focused on them. This relies on the optimizer's plan for the order in which the tables are accessed already being pretty good.


    If the optimizer can't make a good plan at all because of the lack of indexes, what I do is figure out which order I'd like the tables to be accessed, then add indexes that support those accesses.

    Note: the first table accessed does not have the option of using relational criteria, as no records are yet read. First table must be accessed by Filtering Criteria or Read the Whole Table.

    One possible order is the order in the query. This approach might be pretty bad because our Articles Filtering Criteria is based on 3 different ranges. There could be thousands of articles that meet that criteria and it's hard to formulate an index to support those ranges.

    Articles (Filter)
      Articles_to_Geo (Relational by Article_Id)
        Cities_WhiteList (Relational by City_Id) (Filter)
        Cities (Relational by City_Id) (Filter)
      Articles_to_Badges (Relational by Article_Id) (Filter)
        Badges (Relational by Badge_Id)
      Sites (Relational by Article_Id)
    

    Another possible order is Cities first. The Criteria for Cities is easily indexable and there might only be 1 row! Finding the articles for a City and then filtering by date should read fewer rows than finding the articles for dates and then filtering down to the City.

    Cities (Filter)
      Cities_WhiteList (Relational by City_Id) (Filter)
      Articles_to_Geo (Relational by City_Id)
        Articles (Relational by Article_Id) (Filter)
          Articles_to_Badges (Relational by Article_Id) (Filter)
            Badges (Relational by Badge_Id)
          Sites (Relational by Article_Id)
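
    To support that Cities-first order, one index per access step would look roughly like this (the index names and exact column choices are assumptions based on the criteria list above):

```sql
-- Sketch: indexes supporting the Cities-first access order
CREATE INDEX ix_whitelist_city ON cities_whitelist (city_id, published);
CREATE INDEX ix_geo_city       ON articles_to_geo (whitelist_city_id);
CREATE INDEX ix_articles_id    ON articles (article_id); -- often the clustered PK already
CREATE INDEX ix_badges_article ON articles_to_badges (article_id, badge_id);
CREATE INDEX ix_sites_id       ON sites (id);            -- likewise usually the PK
```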
    

    A third approach could be Badges first. This would be best if articles rarely accumulate Badges and there aren't many Badges.

    Badges (Read the Whole Table)
      Articles_to_Badges (Relational by Badge_Id) (Filter)
        Articles (Relational by Article_Id) (Filter)
          Articles_to_Geo (Relational by Article_Id)
            Cities_WhiteList (Relational by City_Id) (Filter)
            Cities (Relational by City_Id) (Filter)
        Sites (Relational by Article_Id)
    
    qid & accept id: (7260488, 7261547) query: How can I get a single result from a related table in SQL? soup:

    soup wrap:

    Since you're using MySQL, I'll give you a MySQL-specific solution that's really easy:

    SELECT 
     gallery.id, 
     gallery.thumbnail_big, 
     products.id, 
     products.title, 
     products.size, 
     products.price, 
     products.text_description, 
     products.main_description 
    FROM gallery
    INNER JOIN products 
    ON gallery.id=products.id
    GROUP BY products.id
    

    Of course this returns an arbitrary gallery.id and thumbnail_big, but you haven't specified which one you want. In practice, it'll be the one that's stored first physically in the table, but you have little control over this.

    The query above is ambiguous, so it wouldn't be allowed by ANSI SQL and most brands of RDBMS. But MySQL allows it (SQLite does too, for what it's worth).

    The better solution is to make the query not ambiguous. For instance, if you want to fetch the gallery image that has the highest primary key value:

    SELECT 
     g1.id, 
     g1.thumbnail_big, 
     p.id, 
     p.title, 
     p.size, 
     p.price, 
     p.text_description, 
     p.main_description 
    FROM products p
    INNER JOIN gallery g1 ON p.id = g1.id
    LEFT OUTER JOIN gallery g2 ON p.id = g2.id AND g1.pkey < g2.pkey
    WHERE g2.id IS NULL
    

    I have to assume you have another column gallery.pkey that is auto-increment, or otherwise serves to uniquely distinguish gallery images for a given product. If you don't have such a column, you need to create one.

    Then the query tries to find a row g2 for the same product, that is greater than g1. If no such row exists, then g1 must be the greatest row.
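
    An equivalent way to express "the g1 row that has no greater g2 row" is a correlated NOT EXISTS, again assuming the gallery.pkey column described above:

```sql
-- Same "greatest row per product" idea, written as an anti-join subquery
SELECT g1.id, g1.thumbnail_big, p.id, p.title
FROM products p
INNER JOIN gallery g1 ON p.id = g1.id
WHERE NOT EXISTS (
    SELECT 1 FROM gallery g2
    WHERE g2.id = p.id       -- same product...
      AND g2.pkey > g1.pkey  -- ...with a later image
);
```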

    qid & accept id: (7270243, 7273455) query: How to localize database table soup:

    soup wrap:

    I recommend going with the second option, although you appear to have some data-typos.

    Country:

    Id  Code
    ===============
    1   IT
    

    Localized_Country:

    CountryId  LanguageCode  LocalizedName
    =========================================
    1          IT            Italia
    1          EN            Italy
    

    Which you then query like so:

    SELECT a.Id, b.LocalizedName
    FROM Country as a
    JOIN Localized_Country as b
    ON b.CountryId = a.Id
    AND b.LanguageCode = :InputLanguageCode
    WHERE a.Code = :InputInternationalCountryCode
    

    Wrap that (or something similar) up in a view, and you're golden.

    Some recommendations:
    - You may want to push Language (or some other type of Locale concept) into its own table. The key can either be an auto-increment value or the international characters; it doesn't much matter which.
    - Make sure to put a unique constraint on (CountryId, LanguageCode), just in case. And never forget your foreign keys.
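
    Putting those recommendations together, the tables might look like this (the types and the inline REFERENCES syntax are a sketch, not a prescription for your particular RDBMS):

```sql
CREATE TABLE Country (
    Id   INT PRIMARY KEY,        -- or an auto-increment value
    Code CHAR(2) NOT NULL UNIQUE -- 'IT', ...
);

CREATE TABLE Language (
    Code CHAR(2) PRIMARY KEY     -- the locale concept in its own table
);

CREATE TABLE Localized_Country (
    CountryId     INT NOT NULL REFERENCES Country (Id),
    LanguageCode  CHAR(2) NOT NULL REFERENCES Language (Code),
    LocalizedName VARCHAR(100) NOT NULL,
    UNIQUE (CountryId, LanguageCode) -- one name per country per language
);
```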

    qid & accept id: (7274514, 7274691) query: SQL query to match keywords? soup:

    soup wrap:

    Yes, possible with full text search, and likely the best answer. For a straight T-SQL solution, you could use a split function and join, e.g. assuming a table of numbers called dbo.Numbers (you may need to decide on a different upper limit):

    SET NOCOUNT ON;
    DECLARE @UpperLimit INT;
    SET @UpperLimit = 200000;
    
    WITH n AS
    (
        SELECT
            rn = ROW_NUMBER() OVER
            (ORDER BY s1.[object_id])
        FROM sys.objects AS s1
        CROSS JOIN sys.objects AS s2
        CROSS JOIN sys.objects AS s3
    )
    SELECT [Number] = rn - 1
    INTO dbo.Numbers
    FROM n
    WHERE rn <= @UpperLimit + 1;
    
    CREATE UNIQUE CLUSTERED INDEX n ON dbo.Numbers([Number]);
    

    And a splitting function that uses that table of numbers:

    CREATE FUNCTION dbo.SplitStrings
    (
        @List NVARCHAR(MAX)
    )
    RETURNS TABLE
    AS
        RETURN
        (
            SELECT DISTINCT
                [Item] = LTRIM(RTRIM(
                    SUBSTRING(@List, [Number],
                    CHARINDEX(N',', @List + N',', [Number]) - [Number])))
            FROM
                dbo.Numbers
            WHERE
                Number <= LEN(@List)
                AND SUBSTRING(N',' + @List, [Number], 1) = N','
        );
    GO
    

    Then you can simply say:

    SELECT [key], NvarcharColumn /*, other cols */
    FROM dbo.table AS outerT
    WHERE EXISTS
    (
      SELECT 1
        FROM dbo.table AS t
        INNER JOIN dbo.SplitStrings(N'list,of,words') AS s
        ON t.NvarcharColumn LIKE '%' + s.Item + '%'
        WHERE t.[key] = outerT.[key]
    );
    

    As a procedure:

    CREATE PROCEDURE dbo.Search
        @List NVARCHAR(MAX)
    AS
    BEGIN
        SET NOCOUNT ON;
    
        SELECT [key], NvarcharColumn /*, other cols */
        FROM dbo.table AS outerT
        WHERE EXISTS
        (
          SELECT 1
            FROM dbo.table AS t
            INNER JOIN dbo.SplitStrings(@List) AS s
            ON t.NvarcharColumn LIKE '%' + s.Item + '%'
            WHERE t.[key] = outerT.[key]
        );
    END
    GO
    

    Then you can just pass in @List (e.g. EXEC dbo.Search @List = N'foo,bar,splunge') from C#.

    This won't be super fast, but I'm sure it will be quicker than pulling all the data out into C# and looping over it in a double-nested loop manually.
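
    For comparison, the full-text route mentioned at the top would look something like this, assuming a full-text index already exists on NvarcharColumn (table and column names as in the examples above):

```sql
-- Requires a full-text catalog and index on dbo.table(NvarcharColumn)
SELECT [key], NvarcharColumn
FROM dbo.table
WHERE CONTAINS(NvarcharColumn, N'"foo" OR "bar" OR "splunge"');
```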

    qid & accept id: (7278905, 7281278) query: Efficiently find top-N values from multiple columns independently in Oracle soup:

    This should only do one pass over the table. You can use the analytic version of count() to get the frequency of each value independently:

    select firstname, count(*) over (partition by firstname) as c_fn,
        lastname, count(*) over (partition by lastname) as c_ln,
        favoriteanimal, count(*) over (partition by favoriteanimal) as c_fa,
        favoritebook, count(*) over (partition by favoritebook) as c_fb
    from my_table;
    
    FIRSTN C_FN LASTNAME C_LN FAVORIT C_FA FAVORITEBOOK C_FB
    ------ ---- -------- ---- ------- ---- ------------ ----
    Bill      1 Ribbits     1 Lemur      2 Dhalgren        1
    Ferris    1 Freemont    2 Possum     1 Ubik            2
    Nancy     2 Freemont    2 Lemur      2 Housekeeping    1
    Nancy     2 Drew        1 Penguin    1 Ubik            2

    You can then use that as a CTE (or subquery factoring, I think in oracle terminology) and pull only the highest-frequency value from each column:

    \n
    with tmp_tab as (\n    select /*+ MATERIALIZE */\n        firstname, count(*) over (partition by firstname) as c_fn,\n        lastname, count(*) over (partition by lastname) as c_ln,\n        favoriteanimal, count(*) over (partition by favoriteanimal) as c_fa,\n        favoritebook, count(*) over (partition by favoritebook) as c_fb\n    from my_table)\nselect (select firstname from (\n        select firstname,\n            row_number() over (partition by null order by c_fn desc) as r_fn\n            from tmp_tab\n        ) where r_fn = 1) as firstname,\n    (select lastname from (\n        select lastname,\n            row_number() over (partition by null order by c_ln desc) as r_ln\n        from tmp_tab\n        ) where r_ln = 1) as lastname,\n    (select favoriteanimal from (\n        select favoriteanimal,\n            row_number() over (partition by null order by c_fa desc) as r_fa\n        from tmp_tab\n        ) where r_fa = 1) as favoriteanimal,\n    (select favoritebook from (\n        select favoritebook,\n            row_number() over (partition by null order by c_fb desc) as r_fb\n        from tmp_tab\n        ) where r_fb = 1) as favoritebook\nfrom dual;\n\nFIRSTN LASTNAME FAVORIT FAVORITEBOOK\n------ -------- ------- ------------\nNancy  Freemont Lemur   Ubik\n
    \n

    You're doing one pass over the CTE for each column, but that should still only hit the real table once (thanks to the materialize hint). And you may want to add to the order by clauses to tweak what do to if there are ties.

    \n

    This is similar in concept to what Thilo, ysth and others have suggested, except you're letting Oracle keep track of all the counting.

    \n

    Edit: Hmm, explain plan shows it doing four full table scans; may need to think about this a bit more...\nEdit 2: Adding the (undocumented) MATERIALIZE hint to the CTE seems to resolve this; it's creating a transient temporary table to hold the results, and only does one full table scan. The explain plan cost is higher though - at least on this time sample data set. Be interested in any comments on any downside to doing this.

    \n soup wrap:

    This should only do one pass over the table. You can use the analytic version of count() to get the frequency of each value independently:

    select firstname, count(*) over (partition by firstname) as c_fn,
        lastname, count(*) over (partition by lastname) as c_ln,
        favoriteanimal, count(*) over (partition by favoriteanimal) as c_fa,
        favoritebook, count(*) over (partition by favoritebook) as c_fb
    from my_table;
    
    FIRSTN C_FN LASTNAME C_LN FAVORIT C_FA FAVORITEBOOK C_FB
    ------ ---- -------- ---- ------- ---- ------------ ----
    Bill      1 Ribbits     1 Lemur      2 Dhalgren        1
    Ferris    1 Freemont    2 Possum     1 Ubik            2
    Nancy     2 Freemont    2 Lemur      2 Housekeeping    1
    Nancy     2 Drew        1 Penguin    1 Ubik            2
    

    You can then use that as a CTE (subquery factoring, in Oracle terminology) and pull only the highest-frequency value from each column:

    with tmp_tab as (
        select /*+ MATERIALIZE */
            firstname, count(*) over (partition by firstname) as c_fn,
            lastname, count(*) over (partition by lastname) as c_ln,
            favoriteanimal, count(*) over (partition by favoriteanimal) as c_fa,
            favoritebook, count(*) over (partition by favoritebook) as c_fb
        from my_table)
    select (select firstname from (
            select firstname,
                row_number() over (partition by null order by c_fn desc) as r_fn
                from tmp_tab
            ) where r_fn = 1) as firstname,
        (select lastname from (
            select lastname,
                row_number() over (partition by null order by c_ln desc) as r_ln
            from tmp_tab
            ) where r_ln = 1) as lastname,
        (select favoriteanimal from (
            select favoriteanimal,
                row_number() over (partition by null order by c_fa desc) as r_fa
            from tmp_tab
            ) where r_fa = 1) as favoriteanimal,
        (select favoritebook from (
            select favoritebook,
                row_number() over (partition by null order by c_fb desc) as r_fb
            from tmp_tab
            ) where r_fb = 1) as favoritebook
    from dual;
    
    FIRSTN LASTNAME FAVORIT FAVORITEBOOK
    ------ -------- ------- ------------
    Nancy  Freemont Lemur   Ubik
    

    You're doing one pass over the CTE for each column, but that should still only hit the real table once (thanks to the materialize hint). You may also want to extend the order by clauses to control what to do if there are ties.

    This is similar in concept to what Thilo, ysth and others have suggested, except you're letting Oracle keep track of all the counting.

    Edit: Hmm, explain plan shows it doing four full table scans; may need to think about this a bit more...

    Edit 2: Adding the (undocumented) MATERIALIZE hint to the CTE seems to resolve this; it creates a transient temporary table to hold the results and does only one full table scan. The explain plan cost is higher, though, at least on this sample data set. I'd be interested in any comments on downsides to doing this.
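    For a quick portable sanity check of the idea (most frequent value of each column, independently), here is a sketch in Python with SQLite; it runs one small GROUP BY per column instead of Oracle analytics, and the top_value helper is purely illustrative:

```python
import sqlite3

# Portable sketch: most frequent value per column, found independently.
# Column names follow the answer's example; ties break alphabetically here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (firstname TEXT, lastname TEXT, "
             "favoriteanimal TEXT, favoritebook TEXT)")
rows = [("Bill", "Ribbits", "Lemur", "Dhalgren"),
        ("Ferris", "Freemont", "Possum", "Ubik"),
        ("Nancy", "Freemont", "Lemur", "Housekeeping"),
        ("Nancy", "Drew", "Penguin", "Ubik")]
conn.executemany("INSERT INTO my_table VALUES (?,?,?,?)", rows)

def top_value(conn, col):
    # col comes only from the fixed tuple below, so interpolation is safe.
    sql = (f"SELECT {col} FROM my_table GROUP BY {col} "
           f"ORDER BY COUNT(*) DESC, {col} LIMIT 1")
    return conn.execute(sql).fetchone()[0]

winners = {c: top_value(conn, c)
           for c in ("firstname", "lastname", "favoriteanimal", "favoritebook")}
```

    Unlike the single-pass analytic version, this does one scan per column, which is exactly the trade-off the MATERIALIZE hint was avoiding.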

    qid & accept id: (7315875, 7316118) query: SQL Query for an update of a column based on other column's data in a Table soup:

    soup wrap:

    According to your comment on the other answer,

    UPDATE Network_Plant_Items
        SET FULL_ADDRESS = 'foobar' || COALESCE(BARCODE, MANUF_SERIAL_NUMBER)
        WHERE BARCODE IS NOT NULL OR MANUF_SERIAL_NUMBER IS NOT NULL
    

    If you want to append this to the current value of FULL_ADDRESS, as I understand from the original question,

    UPDATE Network_Plant_Items
        SET FULL_ADDRESS = FULL_ADDRESS || COALESCE(BARCODE, MANUF_SERIAL_NUMBER)
        WHERE BARCODE IS NOT NULL OR MANUF_SERIAL_NUMBER IS NOT NULL
    

    COALESCE() returns the first non-NULL argument you pass to it. See Oracle's manual page on it.

    Just as a general FYI, the NVL() suggested in other answers is the old Oracle-specific version of COALESCE(). It works much the same way, but it only supports two arguments and evaluates the second argument even if the first one is non-NULL (in other words, it is not short-circuited). Generally it should be avoided in favour of the standard COALESCE(), unless you explicitly need all the arguments evaluated.
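    A tiny sketch confirming the COALESCE semantics described above (first non-NULL argument wins), using Python with SQLite; the items table and its columns are hypothetical:

```python
import sqlite3

# Minimal sketch of COALESCE semantics: the first non-NULL argument wins.
# The items table and its barcode/serial columns are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (barcode TEXT, serial TEXT)")
conn.executemany(
    "INSERT INTO items VALUES (?, ?)",
    [("B123", "S999"), (None, "S111"), (None, None)],
)
picked = [row[0] for row in
          conn.execute("SELECT COALESCE(barcode, serial) FROM items")]
```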

    qid & accept id: (7322330, 7322424) query: use count in sql soup:

    soup wrap:

    This will work with most SQL DBMSs, but it shows the count value in the result:

    SELECT ID, Owner_ID, Owner_Count
      FROM AnonymousTable AS A
      JOIN (SELECT Owner_ID, COUNT(*) AS Owner_Count
              FROM AnonymousTable
             GROUP BY Owner_ID
           ) AS B ON B.Owner_ID = A.Owner_ID
     ORDER BY Owner_Count DESC, Owner_ID ASC, ID ASC;
    

    This will work with some, but not necessarily all, DBMS; it orders by a column that is not shown in the result list:

    SELECT ID, Owner_ID
      FROM AnonymousTable AS A
      JOIN (SELECT Owner_ID, COUNT(*) AS Owner_Count
              FROM AnonymousTable
             GROUP BY Owner_ID
           ) AS B ON B.Owner_ID = A.Owner_ID
     ORDER BY Owner_Count DESC, Owner_ID ASC, ID ASC;
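    The first query above can be run end-to-end like this, sketched in Python with SQLite; table t and its sample rows stand in for AnonymousTable:

```python
import sqlite3

# Runnable sketch of the grouped-subquery join above, with a hypothetical
# two-column table standing in for AnonymousTable.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, owner_id INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, 10), (2, 10), (3, 20), (4, 10)])
sql = """
SELECT a.id, a.owner_id, b.owner_count
  FROM t AS a
  JOIN (SELECT owner_id, COUNT(*) AS owner_count
          FROM t GROUP BY owner_id) AS b
    ON b.owner_id = a.owner_id
 ORDER BY b.owner_count DESC, a.owner_id, a.id
"""
result = conn.execute(sql).fetchall()
```

    Owner 10 has three rows, so its rows sort first; owner 20's single row comes last.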
    
    qid & accept id: (7326337, 7326639) query: Updating a column based on values from other rows soup:

    soup wrap:

    Following your edit...

    DECLARE @T TABLE
    (
    ID INT,
    CategoryID CHAR(4),
    Code CHAR(4),
    Status CHAR(4) NULL
    )
    INSERT INTO @T (ID,CategoryID, Code)
    SELECT 1,'A100',0012 UNION ALL SELECT 2,'A100',0012 UNION ALL
    SELECT 3,'A100',0055 UNION ALL SELECT 4,'A100',0012 UNION ALL
    SELECT 5,'B201',1116 UNION ALL SELECT 6,'B201',1116 UNION ALL
    SELECT 7,'B201',1121 UNION ALL SELECT 8,'B201',1024;
    
    WITH T AS
    (
    SELECT *, MIN(Code) OVER (PARTITION BY CategoryID ) AS MinCode
    from @T
    )
    UPDATE T
    SET Status = 'FAIL'
    WHERE Code <> MinCode
    
    SELECT *
    FROM @T
    

    Returns

    ID          CategoryID Code Status
    ----------- ---------- ---- ------
    1           A100       12   NULL
    2           A100       12   NULL
    3           A100       55   FAIL
    4           A100       12   NULL
    5           B201       1116 FAIL
    6           B201       1116 FAIL
    7           B201       1121 FAIL
    8           B201       1024 NULL
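    SQLite cannot UPDATE through a CTE the way T-SQL does above, but the same rule (fail every row whose Code is not the minimum within its CategoryID) can be sketched as a correlated-subquery UPDATE. This Python example reuses the sample data, with integer codes and shortened column names for simplicity:

```python
import sqlite3

# Portable restatement of the T-SQL updatable-CTE trick: mark every row
# whose code is not the minimum within its category. Integer codes and the
# shortened column names are simplifications for this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INT, category TEXT, code INT, status TEXT)")
conn.executemany("INSERT INTO t (id, category, code) VALUES (?,?,?)",
                 [(1, "A100", 12), (2, "A100", 12), (3, "A100", 55),
                  (4, "A100", 12), (5, "B201", 1116), (6, "B201", 1116),
                  (7, "B201", 1121), (8, "B201", 1024)])
conn.execute("""
UPDATE t SET status = 'FAIL'
 WHERE code <> (SELECT MIN(code) FROM t AS m
                 WHERE m.category = t.category)
""")
failed = [r[0] for r in
          conn.execute("SELECT id FROM t WHERE status = 'FAIL' ORDER BY id")]
```

    This marks the same rows FAIL as the result table above: id 3 in A100 and ids 5, 6, 7 in B201.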
    
    qid & accept id: (7364969, 7774879) query: How to filter SQL results in a has-many-through relation soup:

    soup wrap:

    I was curious. And as we all know, curiosity has a reputation for killing cats.

    So, which is the fastest way to skin a cat?

    The precise cat-skinning environment for this test:

    A multicolumn B-tree index can be used with query conditions that involve any subset of the index's columns, but the index is most efficient when there are constraints on the leading (leftmost) columns.

    Results:

    Total runtimes from EXPLAIN ANALYZE.

    1) Martin 2: 44.594 ms

    SELECT s.stud_id, s.name
    FROM   student s
    JOIN   student_club sc USING (stud_id)
    WHERE  sc.club_id IN (30, 50)
    GROUP  BY 1,2
    HAVING COUNT(*) > 1;
    

    2) Erwin 1: 33.217 ms

    SELECT s.stud_id, s.name
    FROM   student s
    JOIN   (
       SELECT stud_id
       FROM   student_club
       WHERE  club_id IN (30, 50)
       GROUP  BY 1
       HAVING COUNT(*) > 1
       ) sc USING (stud_id);
    

    3) Martin 1: 31.735 ms

    SELECT s.stud_id, s.name
       FROM   student s
       WHERE  stud_id IN (
       SELECT stud_id
       FROM   student_club
       WHERE  club_id = 30
       INTERSECT
       SELECT stud_id
       FROM   student_club
       WHERE  club_id = 50);
    

    4) Derek: 2.287 ms

    SELECT s.stud_id,  s.name
    FROM   student s
    WHERE  s.stud_id IN (SELECT stud_id FROM student_club WHERE club_id = 30)
    AND    s.stud_id IN (SELECT stud_id FROM student_club WHERE club_id = 50);
    

    5) Erwin 2: 2.181 ms

    SELECT s.stud_id,  s.name
    FROM   student s
    WHERE  EXISTS (SELECT * FROM student_club
                   WHERE  stud_id = s.stud_id AND club_id = 30)
    AND    EXISTS (SELECT * FROM student_club
                   WHERE  stud_id = s.stud_id AND club_id = 50);
    

    6) Sean: 2.043 ms

    SELECT s.stud_id, s.name
    FROM   student s
    JOIN   student_club x ON s.stud_id = x.stud_id
    JOIN   student_club y ON s.stud_id = y.stud_id
    WHERE  x.club_id = 30
    AND    y.club_id = 50;
    

    The last three perform pretty much the same. 4) and 5) result in the same query plan.
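    As a quick check that the variants really do agree, here is a small Python/SQLite sketch comparing the GROUP BY/HAVING approach (1) with the double-EXISTS approach (5) on made-up data; COUNT(DISTINCT ...) is used in case a membership row is duplicated, which the original benchmark presumably rules out with a unique constraint:

```python
import sqlite3

# Made-up data: student 1 and student 3 are in both club 30 and club 50;
# one membership row is deliberately duplicated to justify COUNT(DISTINCT).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE student (stud_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE student_club (stud_id INTEGER, club_id INTEGER);
INSERT INTO student VALUES (1,'Ann'),(2,'Bob'),(3,'Cid');
INSERT INTO student_club VALUES (1,30),(1,50),(2,30),(3,50),(3,30),(3,50);
""")
having = conn.execute("""
SELECT s.stud_id FROM student s
JOIN student_club sc USING (stud_id)
WHERE sc.club_id IN (30, 50)
GROUP BY s.stud_id HAVING COUNT(DISTINCT sc.club_id) > 1
ORDER BY s.stud_id""").fetchall()
exists = conn.execute("""
SELECT s.stud_id FROM student s
WHERE EXISTS (SELECT 1 FROM student_club
              WHERE stud_id = s.stud_id AND club_id = 30)
  AND EXISTS (SELECT 1 FROM student_club
              WHERE stud_id = s.stud_id AND club_id = 50)
ORDER BY s.stud_id""").fetchall()
```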

    Late Additions:

    Fancy SQL, but the performance can't keep up.

    7) ypercube 1: 148.649 ms

    SELECT s.stud_id,  s.name
    FROM   student AS s
    WHERE  NOT EXISTS (
       SELECT *
       FROM   club AS c 
       WHERE  c.club_id IN (30, 50)
       AND    NOT EXISTS (
          SELECT *
          FROM   student_club AS sc 
          WHERE  sc.stud_id = s.stud_id
          AND    sc.club_id = c.club_id  
          )
       );
    

    8) ypercube 2: 147.497 ms

    SELECT s.stud_id,  s.name
    FROM   student AS s
    WHERE  NOT EXISTS (
       SELECT *
       FROM  (
          SELECT 30 AS club_id  
          UNION  ALL
          SELECT 50
          ) AS c
       WHERE NOT EXISTS (
          SELECT *
          FROM   student_club AS sc 
          WHERE  sc.stud_id = s.stud_id
          AND    sc.club_id = c.club_id  
          )
       );
    

    As expected, those two perform almost the same. Query plan results in table scans, the planner doesn't find a way to use the indexes here.


    9) wildplasser 1: 49.849 ms

    WITH RECURSIVE two AS (
       SELECT 1::int AS level
            , stud_id
       FROM   student_club sc1
       WHERE  sc1.club_id = 30
       UNION
       SELECT two.level + 1 AS level
            , sc2.stud_id
       FROM   student_club sc2
       JOIN   two USING (stud_id)
       WHERE  sc2.club_id = 50
       AND    two.level = 1
       )
    SELECT s.stud_id, s.name
    FROM   student s
    JOIN   two USING (stud_id)
    WHERE  two.level > 1;
    

    Fancy SQL, decent performance for a CTE. Very exotic query plan.
    Again, it would be interesting to see how 9.1 handles this. I am going to upgrade the db cluster used here to 9.1 soon; maybe I'll rerun the whole shebang ...


    10) wildplasser 2: 36.986 ms

    WITH sc AS (
       SELECT stud_id
       FROM   student_club
       WHERE  club_id IN (30,50)
       GROUP  BY stud_id
       HAVING COUNT(*) > 1
       )
    SELECT s.*
    FROM   student s
    JOIN   sc USING (stud_id);
    

    CTE variant of query 2). Surprisingly, it can result in a slightly different query plan with the exact same data. I found a sequential scan on student, where the subquery-variant used the index.


    11) ypercube 3: 101.482 ms

    Another late addition @ypercube. It is positively amazing, how many ways there are.

    SELECT s.stud_id, s.name
    FROM   student s
    JOIN   student_club sc USING (stud_id)
    WHERE  sc.club_id = 10                 -- member in 1st club ...
    AND    NOT EXISTS (
       SELECT *
       FROM  (SELECT 14 AS club_id) AS c  -- can't be excluded for missing the 2nd
       WHERE  NOT EXISTS (
          SELECT *
          FROM   student_club AS d
          WHERE  d.stud_id = sc.stud_id
          AND    d.club_id = c.club_id
          )
       )
    

    12) erwin 3: 2.377 ms

    @ypercube's 11) is actually just the mind-twisting reverse approach of this simpler variant, which was also still missing. It performs almost as fast as the top cats.

    SELECT s.*
    FROM   student s
    JOIN   student_club x USING (stud_id)
    WHERE  x.club_id = 10                  -- member in 1st club ...
    AND    EXISTS (                        -- ... and membership in 2nd exists
       SELECT *
       FROM   student_club AS y
       WHERE  y.stud_id = s.stud_id
       AND    y.club_id = 14
       )
    

    13) erwin 4: 2.375 ms

    Hard to believe, but here's another, genuinely new variant. I see potential for more than two memberships, but it also ranks among the top cats with just two.

    SELECT s.*
    FROM   student AS s
    WHERE  EXISTS (
       SELECT *
       FROM   student_club AS x
       JOIN   student_club AS y USING (stud_id)
       WHERE  x.stud_id = s.stud_id
       AND    x.club_id = 14
       AND    y.club_id = 10
       )
    

    Dynamic number of club memberships

    In other words: varying number of filters. This question asked for exactly two club memberships. But many use cases have to prepare for a varying number.
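    One common way to handle a varying number of clubs is to build the relational-division query dynamically: an IN (...) list plus HAVING COUNT(DISTINCT club_id) equal to the number of clubs. A sketch in Python with SQLite, on made-up data:

```python
import sqlite3

# Relational division for a dynamic club list: a student qualifies when the
# number of distinct matching clubs equals the number of clubs requested.
# The sample data is made up for this sketch.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE student_club (stud_id INTEGER, club_id INTEGER);
INSERT INTO student_club VALUES (1,10),(1,14),(1,42),(2,10),(3,14),(3,10);
""")

def members_of_all(conn, club_ids):
    ph = ",".join("?" for _ in club_ids)
    sql = (f"SELECT stud_id FROM student_club WHERE club_id IN ({ph}) "
           f"GROUP BY stud_id HAVING COUNT(DISTINCT club_id) = ? "
           f"ORDER BY stud_id")
    return [r[0] for r in conn.execute(sql, [*club_ids, len(club_ids)])]

both = members_of_all(conn, [10, 14])
```

    With exactly two clubs this is query 2) above; the point is that the same shape scales to any number of required memberships.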

    Detailed discussion in this related later answer:

    qid & accept id: (7392374, 7392728) query: Calculate column in view based on other column values soup:

    soup wrap:

    Edited as per @HLGM's comments to make it a bit more robust.

    Note that in its current form, I assume that when

    If this does not suffice, OP might clarify what should be returned instead.

    SQL Statement

        ;WITH Alarm (C1, C1Alarm, C2, C2Alarm, C3, C3Alarm, C4, C4Alarm) AS (
            SELECT  12.44, 0, 99.43, 0, 4.43, 1, 43.33, 0
            UNION ALL SELECT 12.44, 1, 99.43, 0, 4.43, 1, 43.33, 0
            UNION ALL SELECT 1, 0, 2, 1, 3, 1, 4, 1
            UNION ALL SELECT 1, 1, 2, 1, 3, 1, 4, 1
        )
        , AddRowNumbers AS (
            SELECT  rowNumber = ROW_NUMBER() OVER (ORDER BY C1)
                    , C1, C1Alarm
                    , C2, C2Alarm
                    , C3, C3Alarm
                    , C4, C4Alarm
            FROM    Alarm   
        )
        , UnPivotColumns AS (
            SELECT  rowNumber, value = C1 FROM AddRowNumbers WHERE C1Alarm = 0
            UNION ALL SELECT rowNumber, C2 FROM AddRowNumbers WHERE C2Alarm = 0
            UNION ALL SELECT rowNumber, C3 FROM AddRowNumbers WHERE C3Alarm = 0
            UNION ALL SELECT rowNumber, C4 FROM AddRowNumbers WHERE C4Alarm = 0
        )
        SELECT  C1, C1Alarm
                , C2, C2Alarm
                , C3, C3Alarm
                , C4, C4Alarm
                , COALESCE(range1.range, range2.range)
        FROM    AddRowNumbers rowNumber
                LEFT OUTER JOIN (SELECT rowNumber, range = MAX(value) - MIN(value) FROM UnPivotColumns GROUP BY rowNumber HAVING COUNT(*) > 1) range1 ON range1.rowNumber = rowNumber.rowNumber
                LEFT OUTER JOIN (SELECT rowNumber, range = AVG(value) FROM UnPivotColumns GROUP BY rowNumber HAVING COUNT(*) = 1) range2 ON range2.rowNumber = rowNumber.rowNumber  
    

    Test script

    ;WITH Alarm (C1, C1Alarm, C2, C2Alarm, C3, C3Alarm, C4, C4Alarm) AS (
        SELECT  12.44, 0, 99.43, 0, 4.43, 1, 43.33, 0
        UNION ALL SELECT 12.44, 1, 99.43, 0, 4.43, 1, 43.33, 0
        UNION ALL SELECT 1, 0, 2, 1, 3, 1, 4, 1
        UNION ALL SELECT 1, 1, 2, 1, 3, 1, 4, 1
    )
    , AddRowNumbers AS (
        SELECT  rowNumber = ROW_NUMBER() OVER (ORDER BY C1)
                , C1, C1Alarm
                , C2, C2Alarm
                , C3, C3Alarm
                , C4, C4Alarm
        FROM    Alarm   
    )
    , UnPivotColumns AS (
        SELECT  rowNumber, value = C1 FROM AddRowNumbers WHERE C1Alarm = 0
        UNION ALL SELECT rowNumber, C2 FROM AddRowNumbers WHERE C2Alarm = 0
        UNION ALL SELECT rowNumber, C3 FROM AddRowNumbers WHERE C3Alarm = 0
        UNION ALL SELECT rowNumber, C4 FROM AddRowNumbers WHERE C4Alarm = 0
    )
    SELECT  C1, C1Alarm
            , C2, C2Alarm
            , C3, C3Alarm
            , C4, C4Alarm
            , COALESCE(range1.range, range2.range)
    FROM    AddRowNumbers rowNumber
            LEFT OUTER JOIN (SELECT rowNumber, range = MAX(value) - MIN(value) FROM UnPivotColumns GROUP BY rowNumber HAVING COUNT(*) > 1) range1 ON range1.rowNumber = rowNumber.rowNumber
            LEFT OUTER JOIN (SELECT rowNumber, range = AVG(value) FROM UnPivotColumns GROUP BY rowNumber HAVING COUNT(*) = 1) range2 ON range2.rowNumber = rowNumber.rowNumber  
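    To make the query's intent concrete, here is the same per-row logic restated in plain Python: ignore columns whose alarm flag is set; with two or more surviving values return max - min, with exactly one return it unchanged. The row_range helper is illustrative only:

```python
# Plain-Python restatement of the unpivot-and-range query above.
def row_range(values, alarms):
    """Range of the non-alarmed values in one row, or None if all alarmed."""
    ok = [v for v, a in zip(values, alarms) if a == 0]
    if not ok:
        return None           # every column alarmed; the SQL yields NULL
    if len(ok) == 1:
        return ok[0]          # the AVG(value) branch: a single value
    return max(ok) - min(ok)  # the MAX(value) - MIN(value) branch

# First two sample rows from the test script:
r1 = row_range([12.44, 99.43, 4.43, 43.33], [0, 0, 1, 0])
r2 = row_range([12.44, 99.43, 4.43, 43.33], [1, 0, 1, 0])
```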
    
    qid & accept id: (7432065, 7434118) query: How can I do sql union in cake php? soup:


    REMEMBER that cakephp find do the checks neccesary to prevent SQLInyection and other risks... the $model->query will NOT do this tests so be carefull

    \n soup wrap:

    You can do this in four or more different ways. The easiest, but not recommended, is using

    $this->Model->query($query); 
    

    where $query is the query stated above.

    The second way, which may not be exactly what you want, is to rework your SQL query; you will get the same result (but not separated with the alias), like this:

    SELECT * FROM `videos` AS `U1` 
    WHERE `U1`.`level_id` = '1' AND (`U1`.`submitted_date` > '2011-09-11' OR `U1`.`submitted_date` < '2011-09-11')
    ORDER BY  submitted_date DESC
    LIMIT 0,10
    

    This query can easily be done with find like this:

    $conditions = array(
        'Video.level_id'=>1,
        'OR' => array(
            'Video.submitted_date <'=> '2011-09-11',
            'Video.submitted_date >'=> '2011-09-11'
        )
    );
    $this->Video->find('all', array('conditions'=>$conditions)) 
    

    The third way is the one that Abba Bryant talks about, explained in detail in Union syntax in CakePHP, which works by building the statement directly.

    The fourth way is more or less like the first one: you create a behavior with a beforeFind callback, check there whether a 'union' option was passed, and build the query, or build something along the lines of the third option.

    You would call it with a find like this:

    $this->Video->find('all', array('conditions'=>$conditions, 'union'=> $union));
    

    This would be somewhat like the Linkable or Containable behaviors.

    The fifth way is to modify your CakePHP SQL driver. I don't know exactly which changes you would have to make, but it is a way to get there. These drivers are responsible for interpreting and building the queries, connecting to the database, and executing the queries.

    REMEMBER that CakePHP's find does the checks necessary to prevent SQL injection and other risks; $model->query will NOT do these checks, so be careful.

    qid & accept id: (7557231, 7557630) query: Select * from n tables soup:


    To list ALL tables you could try:

    EXEC sp_msforeachtable 'SELECT * FROM  ?'
    

    You can programmatically include/exclude tables by doing something like:

    EXEC sp_msforeachtable 'IF LEFT(''?'',9)=''[dbo].[xy'' BEGIN SELECT * FROM  ? END ELSE PRINT LEFT(''?'',9)'
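    sp_msforeachtable is an undocumented SQL Server procedure, but the underlying idea — run a statement once per user table — can be sketched in any client by reading the catalog and looping. A minimal Python/sqlite3 sketch (table names invented; sqlite_master stands in for the system catalog):

```python
import sqlite3

# Invented sample schema for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER, name TEXT);
    CREATE TABLE orders (id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Ann');
    INSERT INTO orders VALUES (10, 9.99);
""")

# Run SELECT * once per user table, like sp_msforeachtable's '?' placeholder.
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
).fetchall()]
results = {}
for table in tables:
    results[table] = conn.execute(f"SELECT * FROM {table}").fetchall()
```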
    
    qid & accept id: (7558371, 7558470) query: 10g Package Construction - Restricting References soup:


    You cannot refer using static SQL to objects that do not exist when the code is compiled. There is nothing you can do about that.

    You would need to modify your code to use dynamic SQL to refer to any object that is created at runtime. You can probably use EXECUTE IMMEDIATE, i.e.

    EXECUTE IMMEDIATE 
      'SELECT COUNT(*) FROM new_mv_name'
      INTO l_cnt;
    

    rather than

    SELECT COUNT(*)
      INTO l_cnt
      FROM new_mv_name;
    

    That being said, however, I would be extremely dubious about a PL/SQL implementation that involved creating any new tables and materialized views at runtime. That is almost always a mistake in Oracle. Why do you need to create new objects at runtime?

    qid & accept id: (7605630, 7605650) query: What's the best approach to dynamically display a single product from the database? soup:


    If you are using MS SQL Server you can order by the NEWID() function to get a random row of data. You still need a service/page on the server side to run this code for you.

    select top 1 productName, sku
    from products
    order by newid()
    

    For MySQL this would suffice:

    SELECT productName, sku
    FROM products
    ORDER BY Rand()
    LIMIT 1
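    The same trick carries over to SQLite, where the function is RANDOM(). A small Python/sqlite3 sketch (the products rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (productName TEXT, sku TEXT)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [("Widget", "W1"), ("Gadget", "G1"), ("Gizmo", "Z1")])

# ORDER BY RANDOM() shuffles the result set; LIMIT 1 keeps a single row.
row = conn.execute(
    "SELECT productName, sku FROM products ORDER BY RANDOM() LIMIT 1"
).fetchone()
```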
    
    qid & accept id: (7656057, 7658392) query: How to make temporary table with row for each of last 24 hours? soup:


    One row for each hour for a given date (SQL Server solution).

    select dateadd(hour, Number, '20110101')
    from master..spt_values
    where type = 'P' and
          number between 0 and 23
    

    Result with a row for each hour in the last 24 hours:

    select dateadd(hour, datediff(hour, 0, getdate()) - number, 0)
    from master..spt_values
    where type = 'P' and
          number between 0 and 23
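    master..spt_values is a SQL Server-specific numbers source; on engines without one you can generate the offsets 0–23 with a recursive CTE. A Python/sqlite3 sketch of the same idea, where a fixed timestamp stands in for getdate() so the output is deterministic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Generate numbers 0..23 with a recursive CTE, then shift a reference
# hour back by each offset: one row per hour of the preceding 24 hours.
rows = conn.execute("""
    WITH RECURSIVE numbers(n) AS (
        SELECT 0
        UNION ALL
        SELECT n + 1 FROM numbers WHERE n < 23
    )
    SELECT strftime('%Y-%m-%d %H:00:00',
                    datetime('2011-01-01 12:30:00', '-' || n || ' hours'))
    FROM numbers
    ORDER BY n
""").fetchall()
```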
    
    qid & accept id: (7676110, 7676269) query: How to remove duplicates from table using SQL query soup:


    It looks like all four column values are duplicated, so you can do this:

    select distinct emp_name, emp_address, sex, marital_status
    from YourTable
    

    However, if marital status can differ and you have some other column on which to base the choice (e.g. you want the latest record based on a create_date column), you can do this:

    select emp_name, emp_address, sex, marital_status
    from YourTable a
    where not exists (select 1 
                       from YourTable b
                      where b.emp_name = a.emp_name and
                            b.emp_address = a.emp_address and
                            b.sex = a.sex and
                            b.create_date > a.create_date)
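    One subtlety: the inner query also sees the outer row itself, so the comparison on create_date must be strict — with >= every row is excluded by its own match and nothing comes back. A runnable Python/sqlite3 sketch with invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE YourTable (
    emp_name TEXT, emp_address TEXT, sex TEXT,
    marital_status TEXT, create_date TEXT)""")
conn.executemany("INSERT INTO YourTable VALUES (?, ?, ?, ?, ?)", [
    ("Ann", "1 Main St", "F", "single",  "2011-01-01"),
    ("Ann", "1 Main St", "F", "married", "2011-06-01"),
    ("Bob", "2 Oak Ave", "M", "single",  "2011-03-01"),
])

# Keep only the row for which no other row in the same
# (name, address, sex) group has a strictly later create_date.
rows = conn.execute("""
    SELECT emp_name, emp_address, sex, marital_status
    FROM YourTable a
    WHERE NOT EXISTS (SELECT 1
                      FROM YourTable b
                      WHERE b.emp_name = a.emp_name AND
                            b.emp_address = a.emp_address AND
                            b.sex = a.sex AND
                            b.create_date > a.create_date)
    ORDER BY emp_name
""").fetchall()
```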
    
    qid & accept id: (7681122, 7681158) query: Oracle - Modify an existing table to auto-increment a column soup:


    If your MAX(noteid) is 799, then try:

    CREATE SEQUENCE noteseq
        START WITH 800
        INCREMENT BY 1
    

    Then when inserting a new record, for the NOTEID column, you would do:

    noteseq.nextval
    
    qid & accept id: (7745609, 7745635) query: SQL select only rows with max value on a column soup:


    At first glance...

    All you need is a GROUP BY clause with the MAX aggregate function:

    SELECT id, MAX(rev)
    FROM YourTable
    GROUP BY id
    

    It's never that simple, is it?

    I just noticed you need the content column as well.

    This is a very common question in SQL: find the whole data for the row with some max value in a column per some group identifier. I heard that a lot during my career. Actually, it was one of the questions I answered in my current job's technical interview.

    It is, actually, so common that the StackOverflow community has created a single tag just to deal with questions like that.

    Basically, you have two approaches to solve that problem:

    Joining with a simple group-identifier, max-value-in-group sub-query

    In this approach, you first find the group-identifier, max-value-in-group (already solved above) in a sub-query. Then you join your table to the sub-query with equality on both group-identifier and max-value-in-group:

    SELECT a.id, a.rev, a.contents
    FROM YourTable a
    INNER JOIN (
        SELECT id, MAX(rev) rev
        FROM YourTable
        GROUP BY id
    ) b ON a.id = b.id AND a.rev = b.rev
    

    Left Joining with self, tweaking join conditions and filters

    In this approach, you left join the table with itself. Equality, of course, goes in the group-identifier. Then, 2 smart moves:

    1. The second join condition is having left side value less than right value
    2. When you do step 1, the row(s) that actually have the max value will have NULL in the right side (it's a LEFT JOIN, remember?). Then, we filter the joined result, showing only the rows where the right side is NULL.

    So you end up with:

    SELECT a.*
    FROM YourTable a
    LEFT OUTER JOIN YourTable b
        ON a.id = b.id AND a.rev < b.rev
    WHERE b.id IS NULL;
    

    Conclusion

    Both approaches bring the exact same result.

    If you have two rows with max-value-in-group for group-identifier, both rows will be in the result in both approaches.

    Both approaches are SQL ANSI compatible, thus, will work with your favorite RDBMS, regardless of its "flavor".

    Both approaches are also performance friendly, however your mileage may vary (RDBMS, DB structure, indexes, etc.). So when you pick one approach over the other, benchmark, and make sure you pick the one which makes the most sense to you.
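    To see that both approaches agree, here is a small Python/sqlite3 check against a throwaway table (rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE YourTable (id INTEGER, rev INTEGER, contents TEXT)")
conn.executemany("INSERT INTO YourTable VALUES (?, ?, ?)", [
    (1, 1, "old"), (1, 2, "new"), (2, 1, "only"),
])

# Approach 1: inner join to a (id, MAX(rev)) sub-query.
join_sub = conn.execute("""
    SELECT a.id, a.rev, a.contents
    FROM YourTable a
    INNER JOIN (SELECT id, MAX(rev) rev FROM YourTable GROUP BY id) b
        ON a.id = b.id AND a.rev = b.rev
    ORDER BY a.id
""").fetchall()

# Approach 2: self left join; the max row has no "bigger" partner,
# so its right side is NULL.
self_join = conn.execute("""
    SELECT a.id, a.rev, a.contents
    FROM YourTable a
    LEFT OUTER JOIN YourTable b ON a.id = b.id AND a.rev < b.rev
    WHERE b.id IS NULL
    ORDER BY a.id
""").fetchall()
```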

    qid & accept id: (7748125, 7748276) query: SQL find two consecutive days in a reservation system soup:


    Just join to the availability table twice

    SELECT rooms.* FROM rooms, availability as a1, availability as a2
    WHERE rooms.id = 123
    AND a1.room_id = rooms.id
    AND a2.room_id=  rooms.id
    AND a1.date_occupied + 1 = a2.date_occupied
    

    or, if we're not into writing SQL like it's 1985:

    SELECT rooms.* FROM rooms
    JOIN availability a1 on a1.room_id = rooms.id
    Join availability a2 on a2.room_id = rooms.id AND a1.date_occupied + 1 = a2.date_occupied
    WHERE rooms.id = 123
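    The date_occupied + 1 arithmetic is dialect specific; in SQLite the equivalent is date(..., '+1 day'). A Python/sqlite3 sketch of the same double join, with invented rows, where a pair only survives when the second day is the day right after the first:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE rooms (id INTEGER);
    CREATE TABLE availability (room_id INTEGER, date_occupied TEXT);
    INSERT INTO rooms VALUES (123);
    INSERT INTO availability VALUES (123, '2011-10-12');
    INSERT INTO availability VALUES (123, '2011-10-13');
    INSERT INTO availability VALUES (123, '2011-10-20');
""")

# Join availability to itself: (a1, a2) matches only when a2 is
# exactly one day after a1 (SQLite date arithmetic via date()).
pairs = conn.execute("""
    SELECT a1.date_occupied, a2.date_occupied
    FROM rooms
    JOIN availability a1 ON a1.room_id = rooms.id
    JOIN availability a2 ON a2.room_id = rooms.id
       AND date(a1.date_occupied, '+1 day') = a2.date_occupied
    WHERE rooms.id = 123
""").fetchall()
```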
    
    qid & accept id: (7763635, 7763673) query: SQL Sort by popularity? soup:
    SELECT PP.playgroup_id, COUNT(*) cnt
    FROM playgroup_players PP
    GROUP BY PP.playgroup_id
    ORDER BY COUNT(*) DESC
    

    This will give you a list of playgroups that have at least one player, sorted by the number of players. Of course, the field names are made up :)

    SELECT G.playgroup_id, COUNT(PP.playgroup_id) cnt
    FROM playgroup G
      LEFT OUTER JOIN playgroup_players PP ON (PP.playgroup_id=G.playgroup_id)
    GROUP BY G.playgroup_id
    ORDER BY COUNT(*) DESC
    

    This should give you a list of ALL playgroups (even the ones with no players). I've tested this on Oracle and on some of my own data and it works
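    A quick way to convince yourself that COUNT(PP.playgroup_id) yields 0 for groups with no players is to run the query against a toy dataset. A Python/sqlite3 sketch (rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE playgroup (playgroup_id INTEGER);
    CREATE TABLE playgroup_players (playgroup_id INTEGER, player TEXT);
    INSERT INTO playgroup VALUES (1);
    INSERT INTO playgroup VALUES (2);
    INSERT INTO playgroup VALUES (3);
    INSERT INTO playgroup_players VALUES (1, 'a');
    INSERT INTO playgroup_players VALUES (1, 'b');
    INSERT INTO playgroup_players VALUES (2, 'c');
""")

# COUNT(PP.playgroup_id) counts only matched rows, so the empty
# playgroup 3 gets a count of 0 instead of disappearing.
rows = conn.execute("""
    SELECT G.playgroup_id, COUNT(PP.playgroup_id) cnt
    FROM playgroup G
      LEFT OUTER JOIN playgroup_players PP ON PP.playgroup_id = G.playgroup_id
    GROUP BY G.playgroup_id
    ORDER BY cnt DESC, G.playgroup_id
""").fetchall()
```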

    qid & accept id: (7794875, 7795191) query: Join a table to itself soup:


    You can perfectly well join the table with itself.

    You should be aware, however, that your design allows you to have multiple levels of hierarchy. Since you are using SQL Server (assuming 2005 or higher), you can have a recursive CTE get your tree structure.

    Proof of concept preparation:

    declare @YourTable table (id int, parentid int, title varchar(20))
    
    insert into @YourTable values
    (1,null, 'root'),
    (2,1,    'something'),
    (3,1,    'in the way'),
    (4,1,    'she moves'),
    (5,3,    ''),
    (6,null, 'I don''t know'),
    (7,6,    'Stick around');
    

    Query 1 - Node Levels:

    with cte as (
        select Id, ParentId, Title, 1 level 
        from @YourTable where ParentId is null
    
        union all
    
        select yt.Id, yt.ParentId, yt.Title, cte.level + 1
        from @YourTable yt inner join cte on cte.Id = yt.ParentId
    )
    select cte.*
    from cte 
    order by level, id, Title
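    The same recursive CTE runs almost unchanged on other engines; SQLite spells it WITH RECURSIVE and needs a real table instead of a table variable. A Python/sqlite3 sketch of the node-level query over a cut-down version of the data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE YourTable (id INTEGER, parentid INTEGER, title TEXT);
    INSERT INTO YourTable VALUES
    (1, NULL, 'root'),
    (2, 1,    'something'),
    (3, 1,    'in the way'),
    (5, 3,    'she moves'),
    (6, NULL, 'I dont know');
""")

# Anchor: root rows (parentid IS NULL) at level 1.
# Recursive step: each child is one level below its parent.
rows = conn.execute("""
    WITH RECURSIVE cte AS (
        SELECT id, parentid, title, 1 AS level
        FROM YourTable WHERE parentid IS NULL
        UNION ALL
        SELECT yt.id, yt.parentid, yt.title, cte.level + 1
        FROM YourTable yt JOIN cte ON cte.id = yt.parentid
    )
    SELECT id, level FROM cte ORDER BY level, id
""").fetchall()
```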
    
    qid & accept id: (7830197, 7830231) query: Remove values in comma separated list from database soup:


    Using these user-defined REGEXP_REPLACE() functions, you may be able to replace it with an empty string:

    UPDATE children SET wishes = REGEXP_REPLACE(wishes, '(,(\s)?)?Surfboard', '') WHERE caseNum='whatever';
    

    Unfortunately, you cannot just use plain old REPLACE() because you don't know where in the string 'Surfboard' appears. In fact, the regex above would probably need additional tweaking if 'Surfboard' occurs at the beginning or end.

    Perhaps you could trim off leading and trailing commas left over like this:

    UPDATE children SET wishes = TRIM(BOTH ',' FROM REGEXP_REPLACE(wishes, '(,(\s)?)?Surfboard', '')) WHERE caseNum='whatever';
    

    So what's going on here? The regex removes 'Surfboard' plus an optional comma & space before it. Then the surrounding TRIM() function eliminates a possible leading comma in case 'Surfboard' occurred at the beginning of the string. That could probably be handled by the regex as well, but frankly, I'm too tired to puzzle it out.

    Note, I've never used these myself and cannot vouch for their effectiveness or robustness, but it is a place to start. And, as others are mentioning in the comments, you really should have these in a normalized wishlist table, rather than as a comma-separated string.

    Update

    Thinking about this more, I'm more partial to just forcing the use of built-in REPLACE() and then cleaning out the extra comma where you may get two commas in a row. This is looking for two commas side by side, as though there had been no spaces separating your original list items. If the items had been separated by commas and spaces, change ',,' to ', ,' in the outer REPLACE() call.

    UPDATE children SET wishes = TRIM(BOTH ',' FROM REPLACE(REPLACE(wishes, 'Surfboard', ''), ',,', ',')) WHERE caseNum='whatever';
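    The nested REPLACE plus trim logic is easy to sanity-check outside MySQL; SQLite writes TRIM(BOTH ',' FROM x) as TRIM(x, ','). A Python/sqlite3 sketch with invented wish lists covering the middle, start, and end positions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE children (caseNum TEXT, wishes TEXT)")
conn.executemany("INSERT INTO children VALUES (?, ?)", [
    ("1", "Surfboard,Bike,Skates"),
    ("2", "Bike,Surfboard,Skates"),
    ("3", "Bike,Surfboard"),
])

# Remove the item, collapse any resulting double comma, then trim
# commas left dangling at either end of the string.
conn.execute("""
    UPDATE children
    SET wishes = TRIM(REPLACE(REPLACE(wishes, 'Surfboard', ''), ',,', ','), ',')
""")
rows = conn.execute("SELECT wishes FROM children ORDER BY caseNum").fetchall()
```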
    
    qid & accept id: (7901416, 7901490) query: Best way to update table with values calculated from same table soup:


    Try creating a table variable:

    DECLARE @temp_receipts TABLE (
    AssociatedReceiptID int,
    sum_value int)
    

    then:

    insert into @temp_receipts
    SELECT AssociatedReceiptID, sum(Value)
    FROM Receipt
    GROUP BY AssociatedReceiptID
    

    and then update the main table totals:

    UPDATE Receipt r
    SET Total = (SELECT sum_value
                 FROM @temp_receipts tt
                 WHERE r.AssociatedReceiptID = tt.AssociatedReceiptID)
    

    However, I would create a table called receipt_totals or something and use that instead. It makes no sense to have the total of each associated receipt in every single related row. If you are doing it for query convenience, consider creating a view joining receipts and receipt_totals.
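    Where the engine allows a correlated sub-query in UPDATE, you can even skip the intermediate table and compute the group sum inline. A Python/sqlite3 sketch (schema trimmed down, rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Receipt (id INTEGER, AssociatedReceiptID INTEGER,
                          Value INTEGER, Total INTEGER);
    INSERT INTO Receipt VALUES (1, 100, 5, NULL);
    INSERT INTO Receipt VALUES (2, 100, 7, NULL);
    INSERT INTO Receipt VALUES (3, 200, 3, NULL);
""")

# Correlated sub-query: each row's Total becomes the sum of Value
# over all rows sharing its AssociatedReceiptID.
conn.execute("""
    UPDATE Receipt
    SET Total = (SELECT SUM(Value)
                 FROM Receipt AS tt
                 WHERE tt.AssociatedReceiptID = Receipt.AssociatedReceiptID)
""")
totals = conn.execute("SELECT id, Total FROM Receipt ORDER BY id").fetchall()
```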

    qid & accept id: (7905182, 7905222) query: How do I turn off this error temporarily while I delete a record? soup:

    I see that there are some keys set with references between the tables how do I just force the deletion anyway?

    You can do this, but it's probably better just to update or delete the rows in the referencing table:

    ALTER TABLE InviteConfiguration NOCHECK CONSTRAINT ALL
    

    or, with a slightly smaller hammer:

     ALTER TABLE InviteConfiguration NOCHECK CONSTRAINT FK_InviteConfiguration_Invite
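    The NOCHECK syntax is SQL Server specific. As an analogous illustration, SQLite toggles foreign-key enforcement globally with a pragma; the Python/sqlite3 sketch below (invented schema) shows the delete failing while the check is on and succeeding once it is off:

```python
import sqlite3

# isolation_level=None keeps us in autocommit so the pragma takes effect.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
    CREATE TABLE Invite (id INTEGER PRIMARY KEY);
    CREATE TABLE InviteConfiguration (
        id INTEGER PRIMARY KEY,
        invite_id INTEGER REFERENCES Invite(id));
    INSERT INTO Invite VALUES (1);
    INSERT INTO InviteConfiguration VALUES (10, 1);
""")

# With enforcement on, deleting the referenced parent row fails.
blocked = False
try:
    conn.execute("DELETE FROM Invite WHERE id = 1")
except sqlite3.IntegrityError:
    blocked = True

# Turn enforcement off (the NOCHECK-style hammer) and try again.
conn.execute("PRAGMA foreign_keys = OFF")
conn.execute("DELETE FROM Invite WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM Invite").fetchone()[0]
```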
    
    qid & accept id: (7991363, 7991989) query: How to pull out schema of db from MySQL/phpMyAdmin? soup:


    Not sure exactly what you want. You can try one of these methods:

    1) Use phpMyAdmin's export feature to export the database. PMA allows you to omit the data.

    2) You can do the same using mysqldump. This command should export CREATE DATABASE/CREATE TABLE queries:

    mysqldump -hlocalhost -uroot -proot --all-databases --no-data > create-database-and-tables.sql
    

    3) You can pull information from the MySQL schema tables. Most MySQL clients (phpMyAdmin, HeidiSQL, etc.) allow you to export the result of queries as CSV. Some useful queries:

    /*
     * DATABASE, TABLE, TYPE
     */
    SELECT TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_SCHEMA NOT IN ('information_schema', 'performance_schema', 'mysql')
    ORDER BY TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE
    
    /*
     * DATABASE, TABLE, COLUMN, TYPE
     */
    SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE, IS_NULLABLE /* ETC */
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_SCHEMA NOT IN ('information_schema', 'performance_schema', 'mysql')
    ORDER BY TABLE_SCHEMA, TABLE_NAME, ORDINAL_POSITION
    
    qid & accept id: (7994408, 7994437) query: Use Alias in Select Query soup:


    You cannot do this:

    SELECT (Complex SubQuery) AS A, (Another Sub Query WHERE ID = A) FROM TABLE
    

    You can however do this:

    SELECT (Another Sub Query WHERE ID = A.somecolumn)
    FROM table
    JOIN (Complex SubQuery) AS A ON (A.X = table.Y)
    

    Or

    SELECT (Another Sub Query)
    FROM table
    WHERE table.afield IN (SELECT Complex SubQuery.otherfield)
    

    The problem is that you cannot refer to aliases like this in the SELECT and WHERE clauses, because they will not have been evaluated by the time the SELECT or WHERE part is executed.
    You can also use a HAVING clause, but HAVING clauses do not use indexes and should be avoided if possible.
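    The derived-table workaround is easy to demonstrate: compute the aliased expression once in the inner query, and the outer query can reference it freely. A Python/sqlite3 sketch with an invented orders table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, price REAL, qty INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 2.0, 3), (2, 5.0, 1)])

# The alias "total" is defined inside the derived table, so the outer
# SELECT and WHERE may refer to it without re-evaluating the expression.
rows = conn.execute("""
    SELECT id, total
    FROM (SELECT id, price * qty AS total FROM orders) t
    WHERE total > 4
    ORDER BY id
""").fetchall()
```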

    qid & accept id: (8001083, 8001125) query: SQL: ORDER BY `date` AND START WHERE`value`="something"? soup:
    SELECT 
      y.*
    FROM
      YourTable y
    WHERE
      y.date <= (SELECT yb.date FROM YourTable yb WHERE yb.color = 'BLUE')
    ORDER BY
      y.date DESC
    LIMIT 4 OFFSET 0
    

    Updated:

    SELECT 
      y.*
    FROM
      YourTable y
    WHERE
      /* The colors 'before' blue */
      y.date < (SELECT yb.date FROM YourTable yb WHERE yb.color = 'BLUE') or
      /* And blue itself */
      y.color = 'BLUE'
    ORDER BY
      y.date DESC
    LIMIT 4 OFFSET 0
    

    Second update to meet newly discovered criteria.

    SELECT 
      y.*
    FROM
      YourTable y,
      (SELECT yb.id, yb.date FROM yb WHERE color = 'GREEN') ys
    WHERE
      /* The colors 'before' green */
      y.date < ys.date or
      /* The colors on the same date as green, but with greater 
         or equal id to green. This includes green itself.
         Note the parentheses here. */
      (y.date = ys.date and y.id >= ys.id)
    ORDER BY
      y.date DESC
    LIMIT 4 OFFSET 0
    
    qid & accept id: (8014982, 8015012) query: Is there a way to make a column's nullability depend on another column's nullability? soup:


    Assuming you are on SQL Server or something similar, you can do this with a CHECK constraint on your table. (Unfortunately, MySQL parses but ignores CHECK constraints, so you'd have to use a trigger for that platform.)

    If the table already exists:

    ALTER TABLE dbo.Exit ADD CONSTRAINT CK_ExitDateReason
    CHECK (
          (ExitDate IS NULL AND ExitReason IS NULL) 
       OR (ExitDate IS NOT NULL AND ExitReason IS NOT NULL) 
    );
    

    If you are creating the table yourself:

    CREATE TABLE dbo.Exit (
         ...
    
       , CONSTRAINT CK_ExitDateReason CHECK ...
    );
    

    Using a check constraint is preferable to using a trigger because the rule is declarative, lives with the table definition, and is enforced automatically on every insert and update, with no extra trigger code to maintain.
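The paired-nullability rule can be verified quickly in SQLite through Python's sqlite3 — a sketch only; `ExitLog` is a stand-in table name (SQLite has no `dbo` schema) and the sample values are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE ExitLog (
        id INTEGER PRIMARY KEY,
        ExitDate TEXT,
        ExitReason TEXT,
        CONSTRAINT CK_ExitDateReason CHECK (
              (ExitDate IS NULL AND ExitReason IS NULL)
           OR (ExitDate IS NOT NULL AND ExitReason IS NOT NULL)
        )
    )
""")
# Both NULL and both NOT NULL satisfy the constraint...
con.execute("INSERT INTO ExitLog (ExitDate, ExitReason) VALUES (NULL, NULL)")
con.execute("INSERT INTO ExitLog (ExitDate, ExitReason) VALUES ('2011-11-07', 'resigned')")
# ...but a date without a reason is rejected.
rejected = False
try:
    con.execute("INSERT INTO ExitLog (ExitDate) VALUES ('2011-11-08')")
except sqlite3.IntegrityError:
    rejected = True
print(rejected)
```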

    qid & accept id: (8015482, 8016442) query: How to merge time intervals in SQL Server soup:

    soup wrap:

    You can use a recursive CTE to build a list of dates and then count the distinct dates.

    declare @T table
    (
      startDate date,
      endDate date
    );
    
    insert into @T values
    ('2011-01-01', '2011-01-05'),
    ('2011-01-04', '2011-01-08'),
    ('2011-01-11', '2011-01-15');
    
    with C as
    (
      select startDate,
             endDate
      from @T
      union all
      select dateadd(day, 1, startDate),
             endDate
      from C
      where dateadd(day, 1, startDate) < endDate       
    )
    select count(distinct startDate) as DayCount
    from C
    option (MAXRECURSION 0)
    

    Result:

    DayCount
    -----------
    11
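The recursive CTE translates almost verbatim to other engines. A sketch in SQLite through Python's sqlite3, where a plain table stands in for the `@T` table variable and `date(d, '+1 day')` replaces `DATEADD` (no `MAXRECURSION` option is needed):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (startDate TEXT, endDate TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [
    ('2011-01-01', '2011-01-05'),
    ('2011-01-04', '2011-01-08'),
    ('2011-01-11', '2011-01-15'),
])
(day_count,) = con.execute("""
    WITH RECURSIVE c(d, endDate) AS (
        SELECT startDate, endDate FROM t
        UNION ALL
        SELECT date(d, '+1 day'), endDate
        FROM c
        WHERE date(d, '+1 day') < endDate
    )
    SELECT COUNT(DISTINCT d) AS DayCount FROM c
""").fetchone()
print(day_count)  # 11
```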
    

    Or you can use a numbers table. Here I use master..spt_values:

    declare @MinStartDate date
    select @MinStartDate = min(startDate)
    from @T
    
    select count(distinct N.number)
    from @T as T
      inner join master..spt_values as N
        on dateadd(day, N.Number, @MinStartDate) between T.startDate and dateadd(day, -1, T.endDate)
    where N.type = 'P'    
    
    qid & accept id: (8030624, 8030698) query: Checking if specific tuple exists in table soup:

    soup wrap:

    Join Test to itself thusly:

    select t1.A, t1.B
    from Test t1
    join Test t2 on t1.A = t2.B and t1.B = t2.A
    

    Or use an intersection:

    select A, B from Test
    intersect
    select B, A from Test
    

    The self-join would probably be faster though.
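Both variants can be checked side by side in SQLite through Python's sqlite3 — the three sample pairs are invented, with (1, 2)/(2, 1) as the only mirrored pair:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Test (A INTEGER, B INTEGER)")
con.executemany("INSERT INTO Test VALUES (?, ?)", [(1, 2), (2, 1), (3, 4)])

# The self-join: each row whose reversed tuple also exists.
self_join = con.execute("""
    SELECT t1.A, t1.B
    FROM Test t1
    JOIN Test t2 ON t1.A = t2.B AND t1.B = t2.A
""").fetchall()

# The intersection of the table with its own column-swapped projection.
intersection = con.execute("""
    SELECT A, B FROM Test
    INTERSECT
    SELECT B, A FROM Test
""").fetchall()

print(sorted(self_join), sorted(intersection))  # both report (1, 2) and (2, 1)
```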

    qid & accept id: (8044345, 8052502) query: DBIx::Class : Resultset order_by based upon existence of a value in the list soup:

    soup wrap:

    ORDER BY expr might be what you're looking for.

    For example, here's a table:

    mysql> select * from test;
    +----+-----------+
    | id | name      |
    +----+-----------+
    |  1 | London    |
    |  2 | Paris     |
    |  3 | Tokio     |
    |  4 | Rome      |
    |  5 | Amsterdam |
    +----+-----------+
    

    Here's the special ordering:

    mysql> select * from test order by name = 'London' desc, 
                                       name = 'Paris'  desc, 
                                       name = 'Amsterdam' desc;
    +----+-----------+
    | id | name      |
    +----+-----------+
    |  1 | London    |
    |  2 | Paris     |
    |  5 | Amsterdam |
    |  3 | Tokio     |
    |  4 | Rome      |
    +----+-----------+
    

    Translating this into a ResultSet method:

    $schema->resultset('Test')->search(
        {},
        {order_by => {-desc => q[name in ('London', 'New York', 'Tokyo')] }}
    );
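The boolean-expression ordering works the same way in SQLite, where a comparison likewise evaluates to 0 or 1 — a quick check through Python's sqlite3 using the table from the example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test (id INTEGER, name TEXT)")
con.executemany("INSERT INTO test VALUES (?, ?)",
                [(1, 'London'), (2, 'Paris'), (3, 'Tokio'),
                 (4, 'Rome'), (5, 'Amsterdam')])
names = [r[0] for r in con.execute("""
    SELECT name FROM test
    ORDER BY name = 'London'    DESC,
             name = 'Paris'     DESC,
             name = 'Amsterdam' DESC
""")]
print(names[:3])  # the three pinned names come first, in the requested order
```

Note that the ordering among the remaining rows (Tokio, Rome) is unspecified unless you add a tiebreaker such as `id`.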
    
    qid & accept id: (8046345, 8046509) query: Conditional GROUP BY and additional columns? soup:

    soup wrap:

    Table X must have at least five columns whose names, we can presume, are a, b, c, x, y.

    If you are doing a single INSERT, then you'll need to insert into all five columns. If you are doing multiple INSERT operations, you can insert into 3 and then 5 (or vice versa) columns. You may have to do some juggling with the NULL values in the select-list of the first alternative. I'm assuming that the columns x and y are INTEGER for definiteness - choose the appropriate type.

    1st Alternative

    INSERT INTO x(a, b, c, x, y)
        SELECT a, b, c, MAX(CAST(NULL AS INTEGER)) AS x, MAX(CAST(NULL AS INTEGER)) AS y
          FROM pqr
         WHERE p_a IS NULL
         GROUP BY a, b, c
        UNION
        SELECT MAX(a) AS a, MAX(b) AS b, MAX(c) AS c, x, y
          FROM pqr
         WHERE p_a IS NOT NULL
         GROUP BY x, y;
    

    You could replace the GROUP BY a, b, c clause with a DISTINCT in front of a in the select-list of the first part of the UNION. In most SQL DBMS, you must list all the non-aggregate columns from the select-list in the GROUP BY clause. Using the MAX means that you have aggregates for x and y in the first half of the UNION and for a, b and c in the second half of the UNION.

    2nd Alternative

    INSERT INTO x(a, b, c)
        SELECT DISTINCT a, b, c
          FROM pqr
         WHERE p_a IS NULL;
    INSERT INTO x(a, b, c, x, y)
        SELECT MAX(a) AS a, MAX(b) AS b, MAX(c) AS c, x, y
          FROM pqr
         WHERE p_a IS NOT NULL
         GROUP BY x, y;
    

    As discussed before, you need aggregates on the columns not in the GROUP BY list.

    3rd Alternative

    If you meant that you must group by x and y as well as a, b and c, then the second half of the UNION (or the second SELECT) simplifies to:

        SELECT a, b, c, x, y
          FROM pqr
         WHERE p_a IS NOT NULL
         GROUP BY a, b, c, x, y;
    

    Or you can use DISTINCT again:

        SELECT DISTINCT a, b, c, x, y
          FROM pqr
         WHERE p_a IS NOT NULL;
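The second alternative (two separate INSERTs) can be sketched in SQLite through Python's sqlite3. Column names follow the answer; the target table is named `tgt` here purely to avoid confusion with column `x`, and the sample rows are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pqr (a, b, c, x, y, p_a)")
con.execute("CREATE TABLE tgt (a, b, c, x, y)")  # stand-in for table X
con.executemany("INSERT INTO pqr VALUES (?, ?, ?, ?, ?, ?)", [
    (1, 1, 1, None, None, None),   # p_a IS NULL group
    (1, 1, 1, None, None, None),   # duplicate: collapsed by DISTINCT
    (2, 5, 5, 10, 20, 'p'),        # p_a IS NOT NULL, grouped by (x, y)
    (3, 4, 6, 10, 20, 'p'),        # same (x, y): MAX of a, b, c is taken
])
con.execute("""
    INSERT INTO tgt (a, b, c)
    SELECT DISTINCT a, b, c FROM pqr WHERE p_a IS NULL
""")
con.execute("""
    INSERT INTO tgt (a, b, c, x, y)
    SELECT MAX(a), MAX(b), MAX(c), x, y FROM pqr
    WHERE p_a IS NOT NULL
    GROUP BY x, y
""")
rows = con.execute("SELECT * FROM tgt ORDER BY a").fetchall()
print(rows)
```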
    
    qid & accept id: (8046386, 8047626) query: sum of time based on flag from multiple rows SQL Server soup:

    soup wrap:

    If there was an additional criterion for us to distinguish contiguous sequences of events with identical ign values from one another, we could take from each sequence with ign=1 its earliest event and link it with the earliest event of the corresponding ign=0 sequence.

    It is possible to add such a criterion, as you will see below. I'm going to post the solution first, then explain how it works.

    First, the setup:

    DECLARE @atable TABLE (
      Id int IDENTITY,
      UnitId int,
      eventtime datetime,
      ign bit
    );
    INSERT INTO @atable (UnitId, eventtime, ign)
    SELECT 356, '2011-05-04 10:41:00.000', 1 UNION ALL
    SELECT 356, '2011-05-04 10:42:00.000', 1 UNION ALL
    SELECT 356, '2011-05-04 10:43:00.000', 1 UNION ALL
    SELECT 356, '2011-05-04 10:45:00.000', 1 UNION ALL
    SELECT 356, '2011-05-04 10:47:00.000', 1 UNION ALL
    SELECT 356, '2011-05-04 10:48:00.000', 0 UNION ALL
    SELECT 356, '2011-05-04 11:14:00.000', 1 UNION ALL
    SELECT 356, '2011-05-04 11:14:00.000', 1 UNION ALL
    SELECT 356, '2011-05-04 11:15:00.000', 1 UNION ALL
    SELECT 356, '2011-05-04 11:15:00.000', 1 UNION ALL
    SELECT 356, '2011-05-04 11:15:00.000', 1 UNION ALL
    SELECT 356, '2011-05-04 11:16:00.000', 0 UNION ALL
    SELECT 356, '2011-05-04 11:16:00.000', 0 UNION ALL
    SELECT 356, '2011-05-04 11:16:00.000', 0 UNION ALL
    SELECT 356, '2011-05-04 14:49:00.000', 1 UNION ALL
    SELECT 356, '2011-05-04 14:50:00.000', 1 UNION ALL
    SELECT 356, '2011-05-04 14:50:00.000', 1 UNION ALL
    SELECT 356, '2011-05-04 14:51:00.000', 1 UNION ALL
    SELECT 356, '2011-05-04 14:52:00.000', 0 UNION ALL
    SELECT 356, '2011-05-04 14:52:00.000', 0 UNION ALL
    SELECT 356, '2011-05-04 20:52:00.000', 0;
    

    And now the query:

    WITH
    marked AS (
      SELECT
        *,
        Grp = ROW_NUMBER() OVER (PARTITION BY UnitId ORDER BY eventtime) -
         ROW_NUMBER() OVER (PARTITION BY UnitId, ign ORDER BY eventtime)
      FROM @atable
    ),
    ranked AS (
      SELECT
        *,
        seqRank = DENSE_RANK() OVER (PARTITION BY UnitId, ign ORDER BY Grp),
        eventRank = ROW_NUMBER() OVER (PARTITION BY UnitId, ign, Grp ORDER BY eventtime)
      FROM marked
    ),
    final AS (
      SELECT
        s.UnitId,
        EventStart = s.eventtime,
        EventEnd   = e.eventtime
      FROM ranked s
        INNER JOIN ranked e ON s.UnitId = e.UnitId AND s.seqRank = e.seqRank
      WHERE s.ign = 1
        AND e.ign = 0
        AND s.eventRank = 1
        AND e.eventRank = 1
    )
    SELECT *
    FROM final
    ORDER BY
      UnitId,
      EventStart
    

    This is how it works.

    The marked common table expression (CTE) provides us with the additional criterion I was talking about at the beginning. The result set it produces looks like this:

    Id  UnitId  eventtime                ign  Grp
    --  ------  -----------------------  ---  ---
    1   356     2011-05-04 10:41:00.000  1    0
    2   356     2011-05-04 10:42:00.000  1    0
    3   356     2011-05-04 10:43:00.000  1    0
    4   356     2011-05-04 10:45:00.000  1    0
    5   356     2011-05-04 10:47:00.000  1    0
    6   356     2011-05-04 10:48:00.000  0    5
    7   356     2011-05-04 11:14:00.000  1    1
    8   356     2011-05-04 11:14:00.000  1    1
    9   356     2011-05-04 11:15:00.000  1    1
    10  356     2011-05-04 11:15:00.000  1    1
    11  356     2011-05-04 11:15:00.000  1    1
    12  356     2011-05-04 11:16:00.000  0    10
    13  356     2011-05-04 11:16:00.000  0    10
    14  356     2011-05-04 11:16:00.000  0    10
    15  356     2011-05-04 14:49:00.000  1    4
    16  356     2011-05-04 14:50:00.000  1    4
    17  356     2011-05-04 14:50:00.000  1    4
    18  356     2011-05-04 14:51:00.000  1    4
    19  356     2011-05-04 14:52:00.000  0    14
    20  356     2011-05-04 14:52:00.000  0    14
    21  356     2011-05-04 20:52:00.000  0    14
    

    You can see for yourself how every sequence of events with identical ign can now be easily distinguished from the others by its own key of (UnitId, ign, Grp). So now we can rank every sequence as well as every event within a sequence, which is what the ranked CTE does. It produces the following result set:

    Id  UnitId  eventtime                ign  Grp  seqRank  eventRank
    --  ------  -----------------------  ---  ---  -------  ---------
    1   356     2011-05-04 10:41:00.000  1    0    1        1
    2   356     2011-05-04 10:42:00.000  1    0    1        2
    3   356     2011-05-04 10:43:00.000  1    0    1        3
    4   356     2011-05-04 10:45:00.000  1    0    1        4
    5   356     2011-05-04 10:47:00.000  1    0    1        5
    6   356     2011-05-04 10:48:00.000  0    5    1        1
    7   356     2011-05-04 11:14:00.000  1    1    2        1
    8   356     2011-05-04 11:14:00.000  1    1    2        2
    9   356     2011-05-04 11:15:00.000  1    1    2        3
    10  356     2011-05-04 11:15:00.000  1    1    2        4
    11  356     2011-05-04 11:15:00.000  1    1    2        5
    12  356     2011-05-04 11:16:00.000  0    10   2        1
    13  356     2011-05-04 11:16:00.000  0    10   2        2
    14  356     2011-05-04 11:16:00.000  0    10   2        3
    15  356     2011-05-04 14:49:00.000  1    4    3        1
    16  356     2011-05-04 14:50:00.000  1    4    3        2
    17  356     2011-05-04 14:50:00.000  1    4    3        3
    18  356     2011-05-04 14:51:00.000  1    4    3        4
    19  356     2011-05-04 14:52:00.000  0    14   3        1
    20  356     2011-05-04 14:52:00.000  0    14   3        2
    21  356     2011-05-04 20:52:00.000  0    14   3        3
    

    You can see that an ign=1 sequence can now be matched with an ign=0 sequence with the help of seqRank. And picking only the earliest event from every sequence (filtering by eventRank=1) we'll get start and end times of all the ign=1 sequences. And so the result of the final CTE is:

    UnitId  EventStart               EventEnd
    ------  -----------------------  -----------------------
    356     2011-05-04 10:41:00.000  2011-05-04 10:48:00.000
    356     2011-05-04 11:14:00.000  2011-05-04 11:16:00.000
    356     2011-05-04 14:49:00.000  2011-05-04 14:52:00.000
    

    Obviously, if the last ign=1 sequence isn't followed by an ign=0 event, it will not be shown in the final results, because the last ign=1 sequence will have no matching ign=0 sequence, using the above approach.

    There's one possible case when this query will not work as it is. It's when the event list starts with an ign=0 event instead of ign=1. If that is actually possible, you could simply add the following filter to the ranked CTE:

    WHERE NOT (ign = 0 AND Grp = 0)
    -- Alternatively: WHERE ign <> 0 OR Grp <> 0
    

    It takes advantage of the fact that the first value of Grp will always be 0. So, if 0 is assigned to events with ign=0, those events should be excluded.
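The heart of the marked CTE — subtracting two ROW_NUMBER sequences so that each contiguous run of identical ign values gets its own (ign, Grp) key — can be isolated in a tiny sketch (SQLite 3.25+ for window functions, driven from Python's sqlite3; the six events are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # requires SQLite >= 3.25 for window functions
con.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, ign INTEGER)")
con.executemany("INSERT INTO events (ign) VALUES (?)",
                [(1,), (1,), (0,), (1,), (1,), (0,)])
rows = con.execute("""
    SELECT id, ign,
           ROW_NUMBER() OVER (ORDER BY id) -
           ROW_NUMBER() OVER (PARTITION BY ign ORDER BY id) AS grp
    FROM events
""").fetchall()
print(rows)  # within one run of equal ign, grp is constant; across runs, it changes
```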



    qid & accept id: (8073455, 8073490) query: How can I Determine Date of Import from MySQL? soup:

    soup wrap:

    A quick way is to check the create_time or update_time when you execute this command:

    show table status;
    

    like the following example:

    +--------------------+--------+---------+------------+------+----------------+-------------+------------------+--------------+-----------+----------------+---------------------+---------------------+------------+-------------------+----------+----------------+---------+
    | Name               | Engine | Version | Row_format | Rows | Avg_row_length | Data_length | Max_data_length  | Index_length | Data_free | Auto_increment | Create_time         | Update_time         | Check_time | Collation         | Checksum | Create_options | Comment |
    +--------------------+--------+---------+------------+------+----------------+-------------+------------------+--------------+-----------+----------------+---------------------+---------------------+------------+-------------------+----------+----------------+---------+
    | a_table            | MyISAM |      10 | Dynamic    |    2 |             60 |         120 |  281474976710655 |         1024 |         0 |           NULL | 2011-09-08 18:26:38 | 2011-11-07 20:38:28 | NULL       | latin1_swedish_ci |     NULL |                |         |
    
    qid & accept id: (8108295, 8160666) query: Intersection of sets soup:

    soup wrap:

    So here's what I've come up with:

    $this->Sql = 'SELECT DISTINCT * FROM `nodes` `n`
        JOIN `tagged_nodes` `t` ON t.nid=n.nid';
    
     $i=0;
    foreach( $tagids as $tagid ) {
         $t = 't' . $i++;
        $this->Sql .= ' INNER JOIN `tagged_nodes` `'.$t.'` ON '
            .$t.'.tid=t.tid WHERE '.$t.'.tid='.$tagid;
    }
    

    It's in PHP since I need it to be dynamic, but it would basically be the following if I needed, say, only 2 tags (animals, pets).

    SELECT * FROM nodes n JOIN tagged_nodes t ON t.nid=n.nid
    INNER JOIN tagged_nodes t1 ON t1.tid=t.tid WHERE t1.tid='animals'
    INNER JOIN tagged_nodes t2 ON t2.tid=t.tid WHERE t2.tid='pets'
    

    Am I on the right track?
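For comparison, a common alternative to chaining one join per tag — not taken from this post, just the standard relational-division idiom — is to group by node and demand one match per requested tag. A sketch in SQLite through Python's sqlite3, with invented sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE nodes (nid INTEGER PRIMARY KEY)")
con.execute("CREATE TABLE tagged_nodes (nid INTEGER, tid TEXT)")
con.executemany("INSERT INTO nodes VALUES (?)", [(1,), (2,), (3,)])
con.executemany("INSERT INTO tagged_nodes VALUES (?, ?)", [
    (1, 'animals'), (1, 'pets'),   # node 1 carries both tags
    (2, 'animals'),
    (3, 'pets'),
])
tags = ['animals', 'pets']
placeholders = ','.join('?' * len(tags))
rows = con.execute(f"""
    SELECT nid FROM tagged_nodes
    WHERE tid IN ({placeholders})
    GROUP BY nid
    HAVING COUNT(DISTINCT tid) = ?
""", tags + [len(tags)]).fetchall()
print(rows)  # only the node tagged with every requested tag survives
```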

    qid & accept id: (8110165, 8110179) query: Removing duplicate foreign key rows in MySQL database soup:

    soup wrap:

    Assuming your School table has a store_ID from what you've said.

    I would start by figuring out, for each duplicate, which store_ID you want to keep. I will also assume that you want it to be the lowest ID value. I would then update the Schools' store_ID to be the MIN(store_ID) for the current URL they have. You should then be free to delete the extra store_ID records.

    This is how I would go about the update:

    UPDATE sch
    SET sch.Store_ID = matcher.store_ID
    FROM Schools AS sch
    INNER JOIN Stores AS st ON sch.store_ID = st.store_ID
    INNER JOIN
    (
       SELECT MIN(st.store_id) AS store_ID, store_url
       FROM Schools AS sch
       INNER JOIN Stores AS st ON sch.store_ID = st.store_ID
       GROUP BY Store_URL
    ) AS matcher ON st.Store_URL = matcher.Store_Url
       AND st.Store_ID != matcher.store_ID
    

    If you are able to delete stores that do not have an associated school, the following query will remove the extra rows:

    DELETE FROM st
    FROM Stores AS st
    LEFT JOIN Schools AS sch ON st.Store_ID = sch.Store_Id
    WHERE sch.Store_id IS NULL
    

    If you only want to delete the Store's duplicate records, I would look at this query instead of the above:

    DELETE FROM st
    FROM Stores AS st
    INNER JOIN
    (
       SELECT MIN(st.store_ID) store_Id, st.Store_Url
       FROM Stores AS st
       GROUP BY st.Store_URL
    ) AS useful ON st.Store_Url = useful.Store_URL
    WHERE st.Store_ID != useful.store_Id
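The multi-table DELETE above is SQL Server/MySQL-flavored syntax. As a neutral illustration of the keep-the-lowest-ID idea, here is a sketch in SQLite through Python's sqlite3, where a NOT IN against the per-URL minimum replaces the join (sample stores invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Stores (Store_ID INTEGER PRIMARY KEY, Store_URL TEXT)")
con.executemany("INSERT INTO Stores VALUES (?, ?)",
                [(1, 'a.com'), (2, 'a.com'), (3, 'b.com')])
# Keep only the row with MIN(Store_ID) for each URL.
con.execute("""
    DELETE FROM Stores
    WHERE Store_ID NOT IN (
        SELECT MIN(Store_ID) FROM Stores GROUP BY Store_URL
    )
""")
remaining = con.execute("SELECT Store_ID, Store_URL FROM Stores ORDER BY Store_ID").fetchall()
print(remaining)  # the duplicate of a.com is gone
```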
    
    qid & accept id: (8111247, 8113193) query: How to move a DB2 SQL result table into a physical file? soup:

    soup wrap:

    If you want to create the table automatically, you can also use the following form:

    CREATE TABLE new_table_name 
    AS (SELECT * FROM  
        UNION SELECT * FROM ) WITH DATA
    

    Note that you can create a view over the query to dynamically build the result set on demand. The view can then be referenced from any HLL as a logical file:

    CREATE VIEW new_table_name
    AS SELECT * FROM 
       UNION SELECT * FROM 
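The same create-from-query shape works in most engines. In SQLite (which omits DB2's WITH DATA clause — its CTAS always copies the rows), a sketch through Python's sqlite3, with `t1`/`t2` standing in for the table names elided above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (v INTEGER)")
con.execute("CREATE TABLE t2 (v INTEGER)")
con.executemany("INSERT INTO t1 VALUES (?)", [(1,), (2,)])
con.executemany("INSERT INTO t2 VALUES (?)", [(2,), (3,)])
# Materialize the UNION of the two tables into a new table.
con.execute("CREATE TABLE merged AS SELECT * FROM t1 UNION SELECT * FROM t2")
rows = con.execute("SELECT v FROM merged ORDER BY v").fetchall()
print(rows)  # UNION (not UNION ALL) has removed the shared value 2's duplicate
```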
    
    qid & accept id: (8128360, 8153266) query: using multiple left outer joins pl/sql soup:

    soup wrap:

    Okay, so after taking Wolf's suggestion, I went in and ran the following query:

    select categorytype, count(*)
    from nptcategories
    group by categorytype
    having count(*) > 1;
    

    After running this, I found that there were somehow duplicate records in this table. This was fixed by removing the duplicates and making the ids unique, which was done by running the following script on the DB:

    alter table nptcategories add constraint nptcatidunq unique(categoryid)
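    The two steps above (detect duplicates with HAVING, then enforce uniqueness) can be sketched end-to-end in SQLite with hypothetical data; note SQLite adds the constraint via a unique index rather than ALTER TABLE ... ADD CONSTRAINT:

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE nptcategories(categoryid INTEGER, categorytype TEXT)")
    con.executemany("INSERT INTO nptcategories VALUES (?, ?)",
                    [(1, 'x'), (1, 'x'), (2, 'y')])  # categoryid 1 is duplicated

    # Same HAVING check as above: which categorytype values occur more than once?
    dupes = con.execute("""
        SELECT categorytype, count(*)
        FROM nptcategories
        GROUP BY categorytype
        HAVING count(*) > 1
    """).fetchall()
    print(dupes)  # [('x', 2)]

    # Remove the duplicates, then enforce uniqueness (SQLite's equivalent of
    # the ALTER TABLE ... ADD CONSTRAINT ... UNIQUE above).
    con.execute("""
        DELETE FROM nptcategories
        WHERE rowid NOT IN (SELECT min(rowid) FROM nptcategories GROUP BY categoryid)
    """)
    con.execute("CREATE UNIQUE INDEX nptcatidunq ON nptcategories(categoryid)")
    ```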
    
    qid & accept id: (8139699, 8139768) query: select records that don't have certain values in 2 columns soup:
    soup wrap:
    SELECT
      *
    FROM
      MyTable                                 AS data
    LEFT JOIN
      (SELECT x, y, z FROM UpdateMyTable)     AS check
        ON  data.x = check.x
        AND data.y = check.y
        AND data.z = check.z
    WHERE
      x = @x
      AND check.x IS NULL
    


    OR

    SELECT
      *
    FROM
      MyTable                                 AS data
    WHERE
      x = @x
      AND NOT EXISTS (
                      SELECT
                        *
                      FROM
                        UpdateMyTable        AS check
                      WHERE
                          data.x = check.x
                      AND data.y = check.y
                      AND data.z = check.z
                     )
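    The NOT EXISTS anti-join above can be verified on a tiny SQLite dataset; the aliases are renamed here because check is a reserved word in some engines, and the sample rows are hypothetical:

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE MyTable(x, y, z);
    CREATE TABLE UpdateMyTable(x, y, z);
    INSERT INTO MyTable VALUES (1, 1, 1), (1, 2, 2), (2, 1, 1);
    INSERT INTO UpdateMyTable VALUES (1, 1, 1);
    """)
    # Keep only rows with x = 1 that have no (x, y, z) match in UpdateMyTable.
    rows = con.execute("""
        SELECT m.* FROM MyTable m
        WHERE m.x = 1
          AND NOT EXISTS (SELECT 1 FROM UpdateMyTable u
                          WHERE u.x = m.x AND u.y = m.y AND u.z = m.z)
    """).fetchall()
    print(rows)  # [(1, 2, 2)]
    ```

    The LEFT JOIN ... IS NULL form and the NOT EXISTS form return the same rows here; NOT EXISTS is usually the clearer statement of intent.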
    
    qid & accept id: (8143581, 8143720) query: Extract first numeric part of field soup:

    soup wrap:

    Try this:

    SELECT substring(address, '^\\d+') AS heading_number
    FROM   tbl
    WHERE  zip = 12345
    AND    address ILIKE '3%'
    

    Returns 1 or more digits from the start of the string.
    Leave out the anchor ^ if you want the first sequence of digits in the string instead of the sequence at the start. Example:

    SELECT substring('South 13rd street 3452435 foo', '\\d+');
    

    Read about substring() and regular expressions in the manual.
    In more recent versions (8.0+), don't forget to use the escape string syntax, like this:

    SELECT substring('South 13rd street 3452435 foo', E'\\d+');
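    The anchored vs. unanchored behaviour of the pattern can be sketched with Python's re module, which uses the same \d+ syntax (the helper name is made up for illustration):

    ```python
    import re

    def heading_number(address, anchored=True):
        # anchored=True mimics substring(address, '^\d+');
        # anchored=False mimics substring(address, '\d+').
        pattern = r'^\d+' if anchored else r'\d+'
        m = re.search(pattern, address)
        return m.group(0) if m else None

    print(heading_number('3452 South 13rd street'))                         # 3452
    print(heading_number('South 13rd street 3452435 foo', anchored=False))  # 13
    print(heading_number('no digits here'))                                 # None
    ```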
    
    qid & accept id: (8153000, 8153092) query: Count Events/year in SQL soup:

    soup wrap:

    Look into the date functions in MySQL: http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_datediff

    You can use datediff(), which will give you the difference in days. For example:

    WHERE abs(datediff(now(), event_date)) < 365*5

    or dateadd(); if your event dates are timestamps, use timestampdiff().

    Sample query

    SELECT count(*) FROM mytable
    WHERE abs(datediff(now(), event_date)) < 365*5
    

    UPDATE

    Based on some of the comments I've read here, here's a query for you:

    SELECT year(event_date) as event_year, count(event_date)
    FROM mytable
    WHERE
    abs(datediff(now(), event_date)) < 365*5
    GROUP by year(event_date)
    

    Feel free to adjust the 5 in (365 * 5) for a different range.

    UPDATE 2

    This is NOT very pretty, but you can try this with pure MySQL. You can also turn it into a stored procedure if necessary:

    SET @y6 = year(now());
    SET @y5 = @y6-1;
    SET @y4 = @y5-1;
    SET @y3 = @y4-1;
    SET @y2 = @y3-1;
    SET @y1 = @y2-1;
    
    SET @y7 = @y6+1;
    SET @y8 = @y7+1;
    SET @y9 = @y8+1;
    SET @y10 = @y9+1;
    SET @y11 = @y10+1;
    
    CREATE TEMPORARY TABLE event_years (event_year int not null);
    INSERT INTO event_years SELECT @y1;
    INSERT INTO event_years SELECT @y2;
    INSERT INTO event_years SELECT @y3;
    INSERT INTO event_years SELECT @y4;
    INSERT INTO event_years SELECT @y5;
    INSERT INTO event_years SELECT @y6;
    INSERT INTO event_years SELECT @y7;
    INSERT INTO event_years SELECT @y8;
    INSERT INTO event_years SELECT @y9;
    INSERT INTO event_years SELECT @y10;
    INSERT INTO event_years SELECT @y11;
    
    SELECT ey.event_year , (SELECT count(event_date) from mytable where year(event_date) = ey.event_year)
    from event_years ey;
    

    The temporary table will be dropped automatically when your connection is closed. If you add a DROP TABLE after the SELECT, you might not get your results back.
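    The core of the per-year GROUP BY can be sketched in SQLite (which spells MySQL's year(event_date) as strftime('%Y', event_date)); the sample dates are hypothetical and the 365*5 window and temp-table padding are left out:

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE mytable(event_date TEXT)")
    con.executemany("INSERT INTO mytable VALUES (?)",
                    [("2010-01-05",), ("2010-07-01",), ("2011-03-02",)])
    # year(event_date) in MySQL corresponds to strftime('%Y', ...) in SQLite.
    rows = con.execute("""
        SELECT strftime('%Y', event_date) AS event_year, count(*)
        FROM mytable
        GROUP BY event_year
        ORDER BY event_year
    """).fetchall()
    print(rows)  # [('2010', 2), ('2011', 1)]
    ```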

    qid & accept id: (8159093, 8159213) query: How can you order like items in a nested set hierarchical structure? soup:

    soup wrap:

    I'm still a little unclear on what you are asking, but it appears you can get your desired result set with the following query:

    SELECT distinct 'Junior' as Database, 
           xType, 
           displayLabel, 
           child_xType, 
           child_displayLabel
    FROM MyTable
    ORDER BY displayLabel DESC, child_displayLabel ASC
    

    UPDATE:

    I'm still confused after your last comment but give this a try

    SELECT 'Junior' as Database, 
           xType, 
           displayLabel, 
           child_xType, 
           child_displayLabel
    FROM MyTable
    GROUP BY xType, displayLabel, child_xType, child_displayLabel
    ORDER BY min(lft1),  min(lft2)
    
    qid & accept id: (8216437, 8216634) query: SQL: Remove duplicates soup:

    soup wrap:

    A textbook candidate for the window function row_number():

    ;WITH x AS (
        SELECT unique_ID
              ,row_number() OVER (PARTITION BY worker_ID,type_ID ORDER BY date) AS rn
        FROM   tbl
        )
    DELETE FROM tbl
    FROM   x
    WHERE  tbl.unique_ID = x.unique_ID
    AND    x.rn > 1
    

    This also takes care of the situation where a set of dupes on (worker_ID,type_ID) shares the same date.
    See the simplified demo on data.SE.

    Update with simpler version

    Turns out, this can be simplified: In SQL Server you can delete from the CTE directly:

    ;WITH x AS (
        SELECT unique_ID
              ,row_number() OVER (PARTITION BY worker_ID,type_ID ORDER BY date) AS rn
        FROM   tbl
        )
    DELETE x
    WHERE  rn > 1
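    The same row_number() dedupe can be checked in SQLite (3.25+ for window functions, bundled with most Python 3.8+ builds). SQLite cannot DELETE from a CTE directly, so the row numbers are fed back through an IN list instead; the sample rows are hypothetical:

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE tbl(unique_ID INTEGER PRIMARY KEY, worker_ID, type_ID, date)")
    con.executemany("INSERT INTO tbl VALUES (?,?,?,?)", [
        (1, 10, 'A', '2011-01-01'),
        (2, 10, 'A', '2011-01-02'),  # duplicate of (10, 'A'); only the earliest survives
        (3, 11, 'B', '2011-01-01'),
    ])
    con.execute("""
        DELETE FROM tbl WHERE unique_ID IN (
            SELECT unique_ID FROM (
                SELECT unique_ID,
                       row_number() OVER (PARTITION BY worker_ID, type_ID
                                          ORDER BY date) AS rn
                FROM tbl)
            WHERE rn > 1)
    """)
    survivors = [r[0] for r in con.execute("SELECT unique_ID FROM tbl ORDER BY 1")]
    print(survivors)  # [1, 3]
    ```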
    
    qid & accept id: (8223650, 8223684) query: How do I get all rows that contains a string in a field (SQL)? soup:

    soup wrap:

    You could use the LIKE operator:

    select * from articles where tag like '%php%'
    

    If you are worried about tags which are not php but contain php (say, phphp), then you can match together with the comma delimiter:

    select * from articles where tag like '%php,%' or tag like '%,php%'
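    A slightly tighter variant of the comma trick, sketched in SQLite on hypothetical data: padding the tag list with commas on both sides also matches a row whose tag is exactly php (which the two-pattern version above misses) and still rejects phphp:

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE articles(title TEXT, tag TEXT)")
    con.executemany("INSERT INTO articles VALUES (?,?)", [
        ('a', 'php,mysql'), ('b', 'python'), ('c', 'phphp,js'),
        ('d', 'css,php'), ('e', 'php')])
    # Wrap the stored list in commas so every tag is delimited on both sides.
    rows = con.execute(
        "SELECT title FROM articles WHERE ',' || tag || ',' LIKE '%,php,%'"
    ).fetchall()
    print(rows)  # [('a',), ('d',), ('e',)]
    ```

    That said, a comma-separated tag column is hard to index; a separate article_tags junction table is the usual long-term fix.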
    
    qid & accept id: (8256892, 8257056) query: Converting normal datetime to a time zone in sql server 2008 soup:

    soup wrap:

    Cast it to datetimeoffset, like:

    select CAST(dt as datetimeoffset)  from test
    

    EDIT:

    You can then use SWITCHOFFSET to get to the specified time zone. For your example:

    select switchoffset(CAST(dt as datetimeoffset),'+05:30')  from test 
    

    Results in 2011-11-24 23:26:30.0600000 +05:30
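    SWITCHOFFSET keeps the same instant in time and relabels it with the new offset, which is what astimezone() does for an aware Python datetime. A sketch, assuming the stored value corresponds to 2011-11-24 17:56:30 UTC (fractional seconds dropped):

    ```python
    from datetime import datetime, timedelta, timezone

    # Assumed UTC instant behind the answer's example output.
    dt = datetime(2011, 11, 24, 17, 56, 30, tzinfo=timezone.utc)
    # Relabel with the +05:30 offset, preserving the instant (like SWITCHOFFSET).
    ist = dt.astimezone(timezone(timedelta(hours=5, minutes=30)))
    print(ist.isoformat())  # 2011-11-24T23:26:30+05:30
    ```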

    qid & accept id: (8276553, 8276938) query: Figure out the last item of a group of items in SQL soup:

    soup wrap:

    You did not state your DBMS, but the following is ANSI-compliant SQL (it should work on PostgreSQL, Oracle, and DB2):

    SELECT *
    FROM (
        SELECT listid, 
               itemid,
               case 
                  when lead(itemid) over (partition by listid order by itemid) is null then 'last'
                  else 'not_last'
               end as last_flag
        FROM items_tbl
        WHERE listID = 'List_1'
    ) t
    WHERE itemID = 'item_2'
    

    Edit, the following should work on SQL Server (as that doesn't yet support lead()):

    SELECT listid, 
           itemid,
           case 
             when rn = list_count then 'last'
             else 'not_last'
           end
    FROM (
        SELECT listid, 
               itemid,
               row_number() over (partition by listid order by itemid) as rn,
               count(*) over (partition by listid) as list_count
        FROM items_tbl
        WHERE listID = 'List_1'
    ) t
    WHERE itemID = 'item_2'
    
    qid & accept id: (8293350, 8293373) query: SQL query to join columns in result soup:

    soup wrap:
    \n soup wrap:

    You should try this:

    SELECT que.*, opt.* FROM questions que
    INNER JOIN options opt ON que.queid = opt.queid
    WHERE que.queid = 1
    

    INNER JOIN loads questions and options having at least one corresponding record in each table.

    If you need to get all questions (even the ones without options), you could use:

    SELECT que.*, opt.* FROM questions que
    LEFT JOIN options opt ON que.queid = opt.queid
    WHERE que.queid = 1
    

    LEFT JOIN always loads questions and, if they have options, their options too; if not you get NULL for options columns.

    qid & accept id: (8306044, 8306124) query: SQL - Summing events by date (5 days at a time) soup:

    soup wrap:

    You can try something like this:

    select
        Date,
        (select sum(events)
         from tablename d2
         where abs(datediff(DAY, d1.Date, d2.Date)) <= 2) as EventCount
    from
        tablename d1
    where
        Date between '11/03/2011' and '11/07/2011'
    

    Sample output:

    Date        EventCount
    11/03/2011  12
    11/04/2011  9  ** Note that the correct value for w02 is 9, not 7
    11/05/2011  14
    11/06/2011  10
    11/07/2011  14
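    The correlated 5-day window can be sketched in SQLite on hypothetical data; datediff(DAY, a, b) becomes julianday(a) - julianday(b), and ISO dates are used instead of MM/DD/YYYY so they compare correctly:

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE tablename(Date TEXT, events INTEGER)")
    con.executemany("INSERT INTO tablename VALUES (?,?)", [
        ("2011-11-01", 1), ("2011-11-02", 2), ("2011-11-03", 3),
        ("2011-11-04", 4), ("2011-11-05", 5)])
    # Sum events within +/- 2 days of each date (a 5-day centered window).
    rows = con.execute("""
        SELECT d1.Date,
               (SELECT sum(d2.events) FROM tablename d2
                WHERE abs(julianday(d1.Date) - julianday(d2.Date)) <= 2) AS EventCount
        FROM tablename d1
        WHERE d1.Date = '2011-11-03'
    """).fetchall()
    print(rows)  # [('2011-11-03', 15)]
    ```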
    
    qid & accept id: (8315026, 8315588) query: select with condition oracle soup:

    soup wrap:

    You could use a CASE statement

    SELECT id
      FROM table
     WHERE age = (CASE WHEN variable = 'aaa' 
                       THEN 21
                       WHEN variable = 'bbb'
                       THEN 99
                       ELSE null
                    END)
    

    However, it may be more efficient, and easier to read, to just do an OR:

    SELECT id
      FROM table
     WHERE (variable = 'aaa' AND age = 21)
        OR (variable = 'bbb' AND age = 99)
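    A quick check that the two forms agree, run in SQLite on a hypothetical people table (table is a reserved word, so a concrete name is substituted). Note the CASE form excludes rows where variable matches neither branch, because age = NULL is never true:

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE people(id INTEGER, variable TEXT, age INTEGER)")
    con.executemany("INSERT INTO people VALUES (?,?,?)",
                    [(1, 'aaa', 21), (2, 'aaa', 99), (3, 'bbb', 99), (4, 'ccc', 21)])

    case_rows = con.execute("""
        SELECT id FROM people
        WHERE age = (CASE WHEN variable = 'aaa' THEN 21
                          WHEN variable = 'bbb' THEN 99
                          ELSE NULL END)
    """).fetchall()
    or_rows = con.execute("""
        SELECT id FROM people
        WHERE (variable = 'aaa' AND age = 21)
           OR (variable = 'bbb' AND age = 99)
    """).fetchall()
    print(case_rows, or_rows)  # both [(1,), (3,)]
    ```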
    
    qid & accept id: (8327616, 8327659) query: Dynamic 'LIKE' Statement in SQL (Oracle) soup:

    soup wrap:

    You can use the CONCAT() function:

    SELECT * 
    FROM MATERIALS 
    WHERE longname LIKE CONCAT(shortname, '%')
    

    or even better, the standard || (double pipe) operator:

    SELECT * 
    FROM MATERIALS 
    WHERE longname LIKE (shortname || '%')
    

    Oracle's CONCAT() function does not take more than 2 arguments, so one would have to use the cumbersome CONCAT(CONCAT(a, b), c), while with the operator it's simply a || b || c.
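    The dynamic-LIKE pattern can be sketched in SQLite, which also supports the standard || operator; the MATERIALS rows are hypothetical (SQLite's LIKE is case-insensitive for ASCII by default, unlike Oracle's):

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE MATERIALS(shortname TEXT, longname TEXT)")
    con.executemany("INSERT INTO MATERIALS VALUES (?,?)",
                    [('ALU', 'ALUMINIUM'), ('CU', 'STEEL')])
    # The LIKE pattern is built per-row from another column.
    rows = con.execute(
        "SELECT longname FROM MATERIALS WHERE longname LIKE (shortname || '%')"
    ).fetchall()
    print(rows)  # [('ALUMINIUM',)]
    ```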

    qid & accept id: (8337138, 8337841) query: SQL to get an daily average from month total soup:

    soup wrap:

    Sample data (may vary):

    select * into #totals from (
    select '1001' as person, 114.00  as total, 199905 as month union
    select '1001', 120.00, 199906 union
    select '1001', 120.00, 199907 union
    select '1001', 120.00, 199908  
    
    ) t
    
    select * into #calendar from (
    select cast('19990501' as datetime) as tran_date, 'WEEKEND' as day_type union
    select '19990502', 'WEEKEND' union
    select '19990503', 'WORKING_DAY' union
    select '19990504', 'WORKING_DAY' union
    select '19990505', 'WORKING_DAY' union
    select '19990601', 'WEEKEND' union
    select '19990602', 'WORKING_DAY' union
    select '19990603', 'WORKING_DAY' union
    select '19990604', 'WORKING_DAY' union
    select '19990605', 'WORKING_DAY' union
    select '19990606', 'WORKING_DAY' union
    select '19990701', 'WORKING_DAY' union
    select '19990702', 'WEEKEND' union
    select '19990703', 'WEEKEND' union
    select '19990704', 'WORKING_DAY' union
    select '19990801', 'WORKING_DAY' union
    select '19990802', 'WORKING_DAY' union
    select '19990803', 'WEEKEND' union
    select '19990804', 'WEEKEND' union
    select '19990805', 'WORKING_DAY' union
    select '19990901', 'WORKING_DAY'
    ) t
    

    Select statement; it returns 0 if the day is a weekend or does not exist in the calendar table. Please keep in mind that MAXRECURSION is a value between 0 and 32,767.

    ;with dates as ( 
        select cast('19990501' as datetime) as tran_date 
        union all 
        select dateadd(dd, 1, tran_date) 
        from dates where dateadd(dd, 1, tran_date) <= cast('20010101' as datetime) 
    ) 
    select t.person , d.tran_date, (case when wd.tran_date is not null then t.total / w_days else 0 end) as day_avg 
    from dates d 
    left join #totals t on  
        datepart(yy, d.tran_date) * 100 + datepart(mm, d.tran_date) = t.month 
    left join ( 
            select datepart(yy, tran_date) * 100 + datepart(mm, tran_date) as month, count(*) as w_days 
            from #calendar 
            where day_type = 'WORKING_DAY' 
            group by datepart(yy, tran_date) * 100 + datepart(mm, tran_date) 
    ) c on t.month = c.month  
    left join #calendar wd on d.tran_date = wd.tran_date and wd.day_type = 'WORKING_DAY' 
    where t.person is not null
    option(maxrecursion 20000) 
    
    qid & accept id: (8350660, 8350874) query: LINQ OrderBy Count of Records in a Joined Table soup:

    soup wrap:

    You need to execute a GROUP BY if you want the count:

    SELECT P.Name
    FROM Product P
        INNER JOIN OrderItems OI ON P.productID = OI.productID
            INNER JOIN Orders O ON OI.orderID = O.orderId
    WHERE P.Active = 1 AND O.Status > 2
    GROUP BY P.Name
    ORDER BY count(*) DESC
    

    I'll assume you actually want the count for each group in the projection.

    from p in CRM.tProducts
        join oi in CRM.tOrderItems on p.prodID equals oi.prodID
        join o in CRM.tOrders on oi.orderID equals o.orderID
    where o.status > 1 && p.active == true
    group p by p.Name into nameGroup
    orderby nameGroup.Count()
    select new { Name = nameGroup.Key, Count = nameGroup.Count() };
    
    qid & accept id: (8384688, 8384704) query: Concat two table columns and update one with result soup:

    soup wrap:

    Try this (for MySQL)

    UPDATE your_table
    SET col1 = CONCAT_WS('.', col1, col2)
    

    and this for MS-SQL (SQL Server concatenates strings with + and single-quoted literals; it does not support ||):

    UPDATE your_table
    SET col1 = col1 + '.' + col2
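    As a quick sanity check of the concatenating UPDATE, here it is in SQLite, which (like Oracle and PostgreSQL) uses the standard || operator; the table and values are hypothetical:

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE your_table(col1 TEXT, col2 TEXT)")
    con.execute("INSERT INTO your_table VALUES ('2011', '11')")
    # Standard SQL concatenation; note the '.' literal uses single quotes.
    con.execute("UPDATE your_table SET col1 = col1 || '.' || col2")
    print(con.execute("SELECT col1 FROM your_table").fetchall())  # [('2011.11',)]
    ```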
    
    qid & accept id: (8423506, 8423824) query: T-SQL Dynamically execute stored procedure soup:

    soup wrap:

    Quite simple

    CREATE PROCEDURE [logging]    
       @PROCID int,
       @MESSAGE VARCHAR(MAX)
    -- allows resolution of @PROCID in some circumstances
    -- eg nested calls, no direct permission on inner proc
    WITH EXECUTE AS OWNER
    AS
    BEGIN
        -- you are using schemas, right?
        PRINT OBJECT_SCHEMA_NAME(@PROCID) + '.' + OBJECT_NAME(@PROCID);
        PRINT @MESSAGE
    END;
    GO
    

    Then

    execute logging @@PROCID, N'log_message';
    

    MSDN on OBJECT_SCHEMA_NAME and @@PROCID

    Edit:

    Beware of logging into tables during transactions: on rollback, you'll lose the log data.

    qid & accept id: (8451219, 8456920) query: How do I copy or import Oracle schemas between two different databases on different servers? soup:

    soup wrap:

    Similarly, if you're using Oracle 10g+, you should be able to make this work with Data Pump:

    expdp user1/pass1@db1 directory=dp_out schemas=user1 dumpfile=user1.dmp logfile=user1.log
    

    And to import:

    impdp user2/pass2@db2 directory=dp_out remap_schema=user1:user2 dumpfile=user1.dmp logfile=user2.log
    
    qid & accept id: (8451558, 8451820) query: Remove a decimal from many fields soup:

    soup wrap:

    It sounds like you just need a simple REPLACE:

    SQL> with x as (
      2    select '123E4.00' str from dual
      3    union all
      4    select '123K5.00' from dual
      5    union all
      6    select '123K123' from dual
      7  )
      8  select replace( str, '.' )
      9    from x;
    
    REPLACE(
    --------
    123E400
    123K500
    123K123
    

    You'd need to turn that into an UPDATE statement against your table

    UPDATE some_table
       SET some_column = REPLACE( some_column, '.' )
     WHERE some_column != REPLACE( some_column, '.' )
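    The same UPDATE can be sketched in SQLite on hypothetical data; most engines require the explicit empty-string third argument to replace(), while Oracle lets you omit it as in the answer above:

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE some_table(some_column TEXT)")
    con.executemany("INSERT INTO some_table VALUES (?)",
                    [('123E4.00',), ('123K5.00',), ('123K123',)])
    # Strip every '.' from the column (three-argument replace form).
    con.execute("UPDATE some_table SET some_column = replace(some_column, '.', '')")
    rows = con.execute("SELECT some_column FROM some_table").fetchall()
    print(rows)  # [('123E400',), ('123K500',), ('123K123',)]
    ```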
    
    qid & accept id: (8524475, 8527922) query: Join tables by suitable period soup:
    soup wrap:
    SET search_path='tmp';
    
    DROP TABLE items CASCADE;
    CREATE TABLE items
        ( item_id INTEGER NOT NULL PRIMARY KEY
        , item VARCHAR
        , save_date date NOT NULL
        );
    INSERT INTO items(item_id,item,save_date) VALUES
     ( 1, 'car', '2011-12-01' )
    ,( 2, 'wheel', '2011-12-10' )
    ,( 3, 'screen', '2011-12-11' )
    ,( 4, 'table', '2011-12-15' )
        ;
    
    DROP TABLE periods CASCADE;
    CREATE TABLE periods
        ( period_id INTEGER NOT NULL PRIMARY KEY
        , period_name VARCHAR
        , start_date date NOT NULL
        );
    INSERT INTO periods(period_id,period_name,start_date) VALUES
     ( 1, 'period1', '2011-12-05' )
    ,( 2, 'period2', '2011-12-09' )
    ,( 3, 'period3', '2011-12-12' )
        ;
    -- self-join to find the next interval
    WITH pe AS (
        SELECT p0.period_id,p0.period_name,p0.start_date
            , p1.start_date AS end_date
        FROM periods p0
        -- must be a left join; because the most recent interval is still open
        -- (has no successor)
        LEFT JOIN periods p1 ON p1.start_date > p0.start_date
        WHERE NOT EXISTS (
            SELECT * FROM periods px
            WHERE px.start_date > p0.start_date
            AND px.start_date < p1.start_date
            )
        )
    SELECT it.item_id
        , it.item
        , it.save_date
        , pe.period_id
        , pe.period_name
        , pe.start_date
        , pe.end_date
    FROM items it
    LEFT JOIN pe
           ON it.save_date >= pe.start_date
          AND ( it.save_date < pe.end_date OR pe.end_date IS NULL)
        ;
    

    The result:

     item_id |  item  | save_date  | period_id | period_name | start_date |  end_date
    ---------+--------+------------+-----------+-------------+------------+------------
           1 | car    | 2011-12-01 |           |             |            |
           2 | wheel  | 2011-12-10 |         2 | period2     | 2011-12-09 | 2011-12-12
           3 | screen | 2011-12-11 |         2 | period2     | 2011-12-09 | 2011-12-12
           4 | table  | 2011-12-15 |         3 | period3     | 2011-12-12 |
    (4 rows)
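
    On PostgreSQL 8.4 or later, the self-join in the pe CTE can be replaced by the lead() window function, which reads the next row's start_date directly and avoids the NOT EXISTS subquery. A sketch of the same query against the same tables (the technique is standard, but verify on your version):

    ```sql
    WITH pe AS (
        SELECT period_id, period_name, start_date
            -- the next period's start_date closes this interval;
            -- NULL for the most recent, still-open interval
            , lead(start_date) OVER (ORDER BY start_date) AS end_date
        FROM periods
        )
    SELECT it.item_id, it.item, it.save_date
        , pe.period_id, pe.period_name, pe.start_date, pe.end_date
    FROM items it
    LEFT JOIN pe
           ON it.save_date >= pe.start_date
          AND ( it.save_date < pe.end_date OR pe.end_date IS NULL)
        ;
    ```

    This produces the same result set as the self-join version, with one scan of periods instead of two.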
    
    qid & accept id: (8527731, 8527765) query: sql subquery group by soup:
    soup wrap:
    SELECT REF, UserName, TransDate
    FROM dbo.MyTable    
    WHERE ID = (
        SELECT TOP 1 ID
        FROM dbo.MyTable
        WHERE Status = 1 AND REF = 1001
        ORDER BY TransDate ASC
    )
    

    EDIT:

    Or, if you need the results for each REF, instead of a specific REF, you can try this:

    SELECT mt.REF, mt.UserName, mt.TransDate
    FROM 
        dbo.MyTable mt JOIN (
            SELECT
                REF,
                MIN(TransDate) AS MinTransDate
            FROM dbo.MyTable
            WHERE Status = 1
            GROUP BY REF
        ) MinResult mr ON mr.REF = mt.REF AND mr.MinTransDate = mt.TransDate
    
    qid & accept id: (8546198, 8546321) query: Selecting using two column names, using the other one if one is known of each record soup:

    soup wrap:

    You can use a CASE expression in your join condition, something like this:

    SELECT * FROM games g
        JOIN accounts a 
          ON a.id = case g.userid1 when ? then g.userid2 else g.userid1 end
    WHERE 
        g.userid1 = ? OR g.userid2 = ?
    

    However, depending on your indexes, it may be quicker to use a union, eg.

      SELECT * FROM games g
          JOIN accounts a ON a.id = g.userid2
      WHERE g.userid1 = ?
    UNION ALL
      SELECT * FROM games g
          JOIN accounts a ON a.id = g.userid1
      WHERE g.userid2 = ?
    

    An alternative query using OR,

    SELECT * FROM games g, accounts a 
    WHERE 
          (g.userid1 = ? AND g.userid2 = a.id) 
       OR (g.userid2 = ? AND g.userid1 = a.id)
    
    qid & accept id: (8578252, 8578411) query: change sql parameter to date decimal soup:

    soup wrap:

    Try something like this.

     select CAST(replace(convert(varchar, getdate(), 101), '/', '') AS DECIMAL)
    

    Or something like this where @normaldate is the search date.

    SELECT decimaldate FROM TABLE1 WHERE decimaldate = CAST(replace(convert(varchar, @normaldate, 101), '/', '') AS DECIMAL)
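
    As a side note, CONVERT style 112 already produces YYYYMMDD with no separators, so the REPLACE can be dropped entirely; it also yields numbers that sort in date order, unlike style 101's MMDDYYYY:

    ```sql
    -- style 112 = yyyymmdd, nothing to strip out
    select CAST(convert(varchar(8), getdate(), 112) AS DECIMAL(8,0))
    ```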
    
    qid & accept id: (8610517, 9038725) query: Trying to replace dbms_xmlgen.xmlget with sys_xmlagg soup:

    soup wrap:

    I don't have access to an Oracle DB at the moment, so please forgive inaccuracies.

    The parameterization of the DBMS_XMLGEN call seems to be the goal. This is accomplished by using a little PL/SQL. The Oracle Docs for the DBMS_XMLGEN package describe a few operations which should help. First, create a context from a SYS_REFCURSOR using this form:

    DBMS_XMLGEN.NEWCONTEXT (
      queryString  IN SYS_REFCURSOR)
    RETURN ctxHandle;
    

    Then, use the context in another form of GetXML:

    DBMS_XMLGEN.GETXML (
       ctx          IN ctxHandle, 
       tmpclob      IN OUT NOCOPY CLOB,
       dtdOrSchema  IN number := NONE)
    RETURN BOOLEAN;
    

    Using this method also gives the benefit of potentially reusing the CLOB (not making a new temporary one), which may help with performance. There is another form which is more like the one you were using in your example, but loses this property.

    One more thing... The return of GETXML in this example should tell you whether there were rows returned or not. This should be more reliable than checking the contents of the CLOB when the operation completes. Alternatively, you can use the NumRowsProcessed function on the context to get the count of the rows included in the CLOB.

    Roughly, your code would look something like this:

    DECLARE
      srcRefCursor SYS_REFCURSOR;
      ctx DBMS_XMLGEN.ctxHandle;
      somevalue VARCHAR2(1000);
      myClob CLOB;
      hasRows BOOLEAN;
    BEGIN
      OPEN srcRefCursor FOR
          SELECT c1, c2
          FROM t1
          WHERE c1 = somevalue; --Note parameterized value

      ctx := DBMS_XMLGEN.NEWCONTEXT(srcRefCursor);

      hasRows := DBMS_XMLGEN.GETXML(
          ctx,
          myClob -- XML stored in myClob
      );

      IF (hasRows) THEN
        /* Do work on CLOB here */
      END IF;

      DBMS_XMLGEN.CLOSECONTEXT(ctx);
    END;
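
    If you go the NumRowsProcessed route instead of (or in addition to) checking the boolean, it is a single call against the context handle returned by NEWCONTEXT. A sketch (the name in the package spec is GETNUMROWSPROCESSED; verify against your Oracle version):

    ```sql
    -- numRows NUMBER; ctx is the handle returned by DBMS_XMLGEN.NEWCONTEXT
    numRows := DBMS_XMLGEN.GETNUMROWSPROCESSED(ctx);
    ```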
    
    qid & accept id: (8629046, 8629140) query: How to avoid the null values soup:

    soup wrap:

    If you can explain in more detail how the values from the Value1 and Value2 columns belong together, and only if that "matching" is really deterministic, you could do something like this:

    DECLARE @temp TABLE (ID INT, Value1 VARCHAR(20), Value2 VARCHAR(20))
    
    INSERT INTO @temp
            (ID, Value1, Value2)
    VALUES
            (1, 'Rajan', NULL),
            (3, 'Vijayan', NULL),
            (1, NULL, 'Ravi'),
            (3, NULL, 'sudeep'),
            (2, 'kumar', NULL),
            (2, NULL, 'venkat')
    
    SELECT DISTINCT
       ID, 
       (SELECT Value1 FROM @temp t2 WHERE t2.ID = t.ID AND Value1 IS NOT NULL) AS 'Value1',
       (SELECT Value2 FROM @temp t2 WHERE t2.ID = t.ID AND Value2 IS NOT NULL) AS 'Value2'
    FROM
       @temp t
    

    That would give you one row for each value of ID, with the non-NULL value for Value1 and the non-null value for Value2.

    But as your question stands right now, this approach doesn't work, since you have multiple entries for the same ID - and no explanation as to how to match the two separate values together....

    So as it stands right now, I would say there is no deterministic and proper solution for your question. You need to provide more information so we can find a solution for you.

    Update: if you were to upgrade to SQL Server 2005 or newer, you could use two CTEs - but in that case, too, you would have to define some rule/ordering for how the two rows with ID = 001 are joined together.

    Something like:

    DECLARE @temp TABLE (ID INT, Value1 VARCHAR(20), Value2 VARCHAR(20))
    
    INSERT INTO @temp
            (ID, Value1, Value2)
    VALUES
            (1, 'Rajan', NULL),
            (1, 'Vijayan', NULL),
            (1, NULL, 'Ravi'),
            (1, NULL, 'sudeep'),
            (2, 'kumar', NULL),
            (2, NULL, 'venkat')
    
    ;WITH Value1CTE AS
    (
        SELECT ID, Value1,
           ROW_NUMBER() OVER (PARTITION BY ID ORDER BY Value1) AS 'RowNum'
        FROM @temp
        WHERE Value1 IS NOT NULL
    ),
    Value2CTE AS
    (
        SELECT ID, Value2,
           ROW_NUMBER() OVER (PARTITION BY ID ORDER BY Value2) AS 'RowNum'
        FROM @temp
        WHERE Value2 IS NOT NULL
    )
    SELECT 
       v1.ID, 
        v1.Value1, v2.Value2
    FROM
       Value1CTE v1
    INNER JOIN 
        Value2CTE v2 ON v1.ID = v2.ID AND v1.RowNum = v2.RowNum
    

    would give you a reproducible output of:

    ID  Value1  Value2
    1   Rajan   Ravi
    1   Vijayan sudeep
    2   kumar   venkat
    

    This is under the assumption that given two entries with the SAME ID, you want to sort (ORDER BY) the actual values (e.g. Rajan before Vijayan and Ravi before sudeep --> there you'd join Rajan and Ravi together, as well as Vijayan and sudeep).

    But again: this works in SQL Server 2005 and newer only - there is no equivalent in SQL Server 2000, unfortunately.

    qid & accept id: (8636956, 8644844) query: How to join two tables with one of them not having a primary key and not the same character length soup:

    soup wrap:

    Try this to compare the first 8 characters only:

    SELECT r.domainid, r.dombegin, r.domend, d.ddid 
    FROM   domainregion r
    JOIN   dyndomrun d ON r.domainid::varchar(8) = d.ddid 
    ORDER  BY r.domainid, d.ddid, r.dombegin, r.domend;
    

    The cast implicitly trims trailing characters. ddid only has 8 characters to begin with. No need to process it, too. This achieves the same:

    JOIN   dyndomrun d ON left(r.domainid, 8) = d.ddid 
    

    However, be advised that the string function left() was only introduced with PostgreSQL 9.1. In earlier versions you can substitute:

    JOIN   dyndomrun d ON substr(r.domainid, 1, 8) = d.ddid
    

    Basic explanation for beginners:

    It's a simple query, not much to explain here.

    qid & accept id: (8645254, 8645279) query: Find rows with same ID and have a particular set of names soup:

    soup wrap:

    The simplest way is to compare a COUNT per ID with the number of elements in your list:

    SELECT
       ID
    FROM
       MyTable
    WHERE
       NAME IN ('A', 'B', 'C')
    GROUP BY
       ID
    HAVING
       COUNT(*) = 3;
    

    Note: ORDER BY isn't needed here; if you do need one, it goes after the HAVING clause.

    Edit, with question update. In MySQL, it's easier to use a separate table for search terms

    DROP TABLE IF EXISTS gbn;
    CREATE TABLE gbn (ID INT, `name` VARCHAR(100), REV INT);
    INSERT gbn VALUES (1, 'A', 0);
    INSERT gbn VALUES (1, 'B', 0);
    INSERT gbn VALUES (1, 'C', 0);
    INSERT gbn VALUES (2, 'A', 1);
    INSERT gbn VALUES (2, 'B', 0);
    INSERT gbn VALUES (2, 'C', 0);
    INSERT gbn VALUES (3, 'A', 0);
    INSERT gbn VALUES (3, 'B', 0);
    
    DROP TABLE IF EXISTS gbn1;
    CREATE TABLE gbn1 ( `name` VARCHAR(100));
    INSERT gbn1 VALUES ('A');
    INSERT gbn1 VALUES ('B');
    
    SELECT
       gbn.ID
    FROM
       gbn
       LEFT JOIN
       gbn1 ON gbn.`name` = gbn1.`name`
    GROUP BY
       gbn.ID
    HAVING
       COUNT(*) = (SELECT COUNT(*) FROM gbn1)
       AND MIN(gbn.REV) = MAX(gbn.REV);
    
    INSERT gbn1 VALUES ('C');
    
    SELECT
       gbn.ID
    FROM
       gbn
       LEFT JOIN
       gbn1 ON gbn.`name` = gbn1.`name`
    GROUP BY
       gbn.ID
    HAVING
       COUNT(*) = (SELECT COUNT(*) FROM gbn1)
       AND MIN(gbn.REV) = MAX(gbn.REV);
    

    Edit 2, without extra table, use a derived (inline) table:

    SELECT
       gbn.ID
    FROM
       gbn
       LEFT JOIN
       (SELECT 'A' AS `name`
        UNION ALL SELECT 'B' 
        UNION ALL SELECT 'C'
       ) gbn1 ON gbn.`name` = gbn1.`name`
    GROUP BY
       gbn.ID
    HAVING
       COUNT(*) = 3 -- matches number of elements in gbn1 derived table
       AND MIN(gbn.REV) = MAX(gbn.REV);
    
    qid & accept id: (8647675, 8649305) query: List category/subcategory tree and display its sub-categories in the same row soup:

    soup wrap:

    When we used to build these concatenated lists in the database, we took a similar approach to what you are doing at first.

    Then, when we went looking for speed, we turned them into CLR functions:
    http://msdn.microsoft.com/en-US/library/a8s4s5dz(v=VS.90).aspx

    Now our database is only responsible for storing and retrieving data; this sort of thing lives in the data layer of the application.
    
    qid & accept id: (8669703, 8670279) query: How do I combine result sets from two stored procedure calls? soup:

    soup wrap:

    This may be oversimplifying the problem, but if you have control over the stored procedure, just use IN rather than =:

    CREATE PROCEDURE [dbo].[MyStored]
    AS
       SELECT blahblahblah WHERE StoredState IN (0,1) LotsOfJoinsFollow;
    RETURN 0
    

    If this is not an option, just push the results of both sproc calls into a temp table:

    /*Create a table with the same columns that the sproc returns*/
    CREATE TABLE #tempblahblah(blahblahblah NVARCHAR(50))
    
    INSERT #tempblahblah ( blahblahblah )
     EXEC MyStored 0
    
    INSERT #tempblahblah ( blahblahblah )
     EXEC MyStored 1
    
    SELECT * FROM #tempblahblah
    qid & accept id: (8684054, 8684257) query: T-SQL how to get date range for 2 week pay period soup:

    soup wrap:

    You need some modulo operations and DATEDIFF.

    declare @periodStart datetime
    declare @periodEnd datetime
    
    set @periodStart = CAST('2011-12-03' as datetime)
    set @periodEnd = CAST('2011-12-16' as datetime)
    
    declare @anyDate datetime
    set @anyDate = CAST('2011-12-30' as datetime)
    
    declare @periodLength int
    set @periodLength = DATEDIFF(day, @periodStart, @periodEnd) + 1
    
    
    declare @daysFromFirstPeriod int
    set @daysFromFirstPeriod = DATEDIFF(day, @periodStart, @anyDate)
    declare @daysIntoPeriod int
    set @daysIntoPeriod = @daysFromFirstPeriod % @periodLength
    
    select @periodLength as periodLength, @daysFromFirstPeriod as daysFromFirstPeriod, @daysIntoPeriod as daysIntoPeriod
    select DATEADD(day, -@daysIntoPeriod, @anyDate) as currentPeriodStart, DATEADD(day, @periodLength -@daysIntoPeriod, @anyDate) as currentPeriodEnd
    

    Gives output

    periodLength    daysFromFirstPeriod daysIntoPeriod
    14              27                  13
    

    and

    currentPeriodStart        currentPeriodEnd
    2011-12-17 00:00:00.000   2011-12-31 00:00:00.000
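
    On SQL Server 2008 or newer, the same arithmetic collapses into a couple of expressions; a sketch with the period length hard-coded to 14 days (the inline DECLARE ... = syntax needs 2008+):

    ```sql
    declare @anchor datetime = '2011-12-03'   -- start of any known pay period
    declare @anyDate datetime = '2011-12-30'

    select DATEADD(day, -(DATEDIFF(day, @anchor, @anyDate) % 14), @anyDate) as currentPeriodStart,
           DATEADD(day, 14 - (DATEDIFF(day, @anchor, @anyDate) % 14), @anyDate) as currentPeriodEnd
    ```

    For the dates above this returns 2011-12-17 and 2011-12-31, matching the longer version.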
    
    qid & accept id: (8711054, 8711080) query: Trying to get rid of comma at end of a column soup:

    soup wrap:

    You can use substring.

    Here is an example:

    declare @test varchar(5)
    select @test = '12,'
    
    select substring(@test, 1, len(@test)-1)
    

    In your case it would be:

    UPDATE [Database].[schema].[Table]
    SET    [Columnx] = substring([Columnx], 1, len([Columnx])-1)
    WHERE  [Columnx] like '%,'
    AND    len([Columnx]) > 0
    
    qid & accept id: (8718458, 8718594) query: view all data for duplicate rows in oracle soup:

    soup wrap:

    You can always use the GROUP BY/ HAVING query in an IN clause. This works and is relatively straightforward but it may not be particularly efficient if the number of duplicate rows is relatively large.

    SELECT *
      FROM table1
     WHERE (name, type_id) IN (SELECT name, type_id
                                 FROM table1
                                GROUP BY name, type_id
                               HAVING COUNT(*) > 1)
    

    It would generally be more efficient to use analytic functions in order to avoid hitting the table a second time.

    SELECT *
      FROM (SELECT id, 
                   name,
                   type_id,
                   code,
                   lat,
                   long,
                   count(*) over (partition by name, type_id) cnt
              FROM table1)
     WHERE cnt > 1
    

    Depending on what you are planning to do with the data and how many duplicates of a particular row there might be, you also might want to join table1 to itself to get the data in a single row

    SELECT a.name,
           a.type_id,
           a.id,
           b.id,
           a.code,
           b.code,
           a.lat,
           b.lat,
           a.long,
           b.long
      FROM table1 a
           JOIN table1 b ON (a.name = b.name AND
                             a.type_id = b.type_id AND
                             a.rowid > b.rowid)
    
    qid & accept id: (8806028, 8806289) query: How to do calculations with crosstab/pivot via case in sqlite? soup:

    soup wrap:

    You can always just do the sums again, like so:

    SELECT 
        shop_id,
        sum(CASE WHEN product = 'Fiesta' THEN units END) as Fiesta,
        sum(CASE WHEN product = 'Focus' THEN units END) as Focus,
        sum(CASE WHEN product = 'Puma' THEN units END) as Puma,
        sum(CASE WHEN product = 'Fiesta' THEN units END) / sum(CASE WHEN product = 'Focus' THEN units END) as Ratio
    FROM sales
    GROUP BY shop_id
    

    Or, faster, you can wrap it up in a subquery, like this:

    select
        shop_id,
        Fiesta,
        Focus,
        Puma,
        Fiesta/Focus as Ratio
    from
        (
        SELECT 
            shop_id,
            sum(CASE WHEN product = 'Fiesta' THEN units END) as Fiesta,
            sum(CASE WHEN product = 'Focus' THEN units END) as Focus,
            sum(CASE WHEN product = 'Puma' THEN units END) as Puma
        FROM sales
        GROUP BY shop_id
        ) x
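
    One caveat in SQLite: if units is an INTEGER column, Fiesta/Focus is integer division and the ratio will be truncated. Casting one operand to REAL fixes that; a sketch on top of the subquery form:

    ```sql
    SELECT shop_id,
           Fiesta,
           Focus,
           Puma,
           -- CAST forces floating-point division
           CAST(Fiesta AS REAL) / Focus AS Ratio
    FROM
        (
        SELECT shop_id,
               sum(CASE WHEN product = 'Fiesta' THEN units END) AS Fiesta,
               sum(CASE WHEN product = 'Focus' THEN units END) AS Focus,
               sum(CASE WHEN product = 'Puma' THEN units END) AS Puma
        FROM sales
        GROUP BY shop_id
        ) x
    ```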
    
    qid & accept id: (8847175, 8847300) query: how to select from table untill the total is a specific number? soup:

    soup wrap:

    You could make it easier for yourself by adding an extra column, containing the sum of the amounts with a lower ID.

    "ID" "oamount" "mamount"
    '1'  '1500'    '0'
    '2'  '2000'    '1500'
    '3'  '2000'    '3500'
    '4'  '1000'    '5500'
    

    You can then select based on that new column:

    SELECT `ID`,
        CASE WHEN `oamount` < @Amount - `mamount`
             THEN `oamount`
             ELSE @Amount - `mamount` END AS `amount`
    FROM `yourtable`
    WHERE `mamount` < @Amount
    

    You can do it without adding this extra column, but you'll be making things unnecessarily hard.
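
    For reference, the harder way: the running sum can be computed on the fly with a correlated subquery instead of a stored column, at the cost of rescanning the table for every row. A sketch, assuming ID defines the payment order:

    ```sql
    SELECT t.`ID`,
        CASE WHEN t.`oamount` < @Amount - t.`mamount`
             THEN t.`oamount`
             ELSE @Amount - t.`mamount` END AS `amount`
    FROM (
        SELECT `ID`, `oamount`,
            -- sum of all earlier rows' amounts; 0 for the first row
            (SELECT COALESCE(SUM(p.`oamount`), 0)
               FROM `yourtable` p
              WHERE p.`ID` < y.`ID`) AS `mamount`
        FROM `yourtable` y
    ) t
    WHERE t.`mamount` < @Amount
    ```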

    qid & accept id: (8928978, 8929247) query: How can I use the LIKE operator on a list of strings to compare? soup:

    \n soup wrap:

    You could do something like this -

    SELECT FIND_IN_SET(
      'bigD',
       REPLACE(REPLACE('barfy,max,whiskers,champ,big-D,Big D,Sally', '-', ''), ' ', '')
      ) has_petname;
    +-------------+
    | has_petname |
    +-------------+
    |           5 |
    +-------------+
    

    It will return a non-zero value (>0) if the pet_name we are looking for is present.

    But I'd suggest you create a petnames table and use the SOUNDS LIKE operator to compare names; in this case 'bigD' will match 'big-D', e.g.:

    SELECT 'bigD' SOUNDS LIKE 'big-D';
    +---------------------------+
    | 'bigD'SOUNDS LIKE 'big-D' |
    +---------------------------+
    |                         1 |
    +---------------------------+
    

    Example:

    CREATE TABLE petnames(name VARCHAR(40));
    INSERT INTO petnames VALUES
      ('barfy'),('max'),('whiskers'),('champ'),('big-D'),('Big D'),('Sally');
    
    SELECT name FROM petnames WHERE 'bigD' SOUNDS LIKE name;
    +-------+
    | name  |
    +-------+
    | big-D |
    | Big D |
    +-------+
    
    qid & accept id: (8939857, 8941609) query: Generating a series from a predefined date (PG) soup:


    Turns out, it can be even simpler. :)

    SELECT generate_series(
              date_trunc('year', min(created_at))
            , now()
            , interval '1 month') AS month
    FROM   users;
    

    More about date_trunc in the manual.

    Or, if you actually want the data type date instead of timestamp with time zone:

    SELECT generate_series(
              date_trunc('year', min(created_at))
            , now()
            , interval '1 month')::date AS month
    FROM   users;
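    generate_series and date_trunc are PostgreSQL features, but the month stepping they perform can be sketched in plain Python to show exactly what the query produces. The dates below are assumptions for the example:

```python
from datetime import date

# What generate_series(date_trunc('year', min(created_at)), now(), '1 month')
# yields: the first of every month from Jan 1 of the start year up to "now".
def month_series(start, end):
    months = []
    y, m = start.year, 1          # date_trunc('year', ...) -> Jan 1st
    d = date(y, m, 1)
    while d <= end:
        months.append(d)
        y, m = (y + 1, 1) if m == 12 else (y, m + 1)
        d = date(y, m, 1)
    return months

# assumed example: earliest signup 2011-03-17, "now" is 2011-06-02
series = month_series(date(2011, 3, 17), date(2011, 6, 2))
print(series)  # Jan..Jun 2011, one entry per month
```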
    
    qid & accept id: (9015870, 9016168) query: Find position of given PK and next and previous row as one result row soup:


    Try this one -

    +row position

    SELECT car_id, url, signup, CONCAT(pos1, '/', @p1) position FROM (
      SELECT
        c.*,
        @p1:=@p1+1 pos1,
        @p2:=IF(car_id = 3 AND @p2 IS NULL, @p1, @p2)
      FROM
        cars c,
        (SELECT @p1:=0, @p2:=NULL) t
      ORDER BY
        signup
    ) t
    WHERE
      pos1 BETWEEN @p2 - 1 AND @p2 + 1
    

    You wrote: the desired result would be: pos, nextid, nexturl, previd, prevurl

    Try this query:

    SELECT
      @p2 pos,
      MAX(IF(pos1 > @p2, car_id, NULL)) nextid,
      MAX(IF(pos1 > @p2, url, NULL)) nexturl,
      MAX(IF(pos1 < @p2, car_id, NULL)) previd,
      MAX(IF(pos1 < @p2, url, NULL)) prevurl
    FROM (
      SELECT
        c.*,
        @p1:=@p1+1 pos1,
        @p2:=IF(car_id = 3 AND @p2 IS NULL, @p1, @p2)
      FROM
        cars c,
        (SELECT @p1:=0, @p2:=NULL) t
      ORDER BY
        signup
    ) t
    WHERE
      pos1 BETWEEN @p2 - 1 AND @p2 + 1
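    User variables like @p1/@p2 are MySQL-specific and depend on evaluation order. On engines with window functions, ROW_NUMBER/LAG/LEAD express the same "position plus neighbours" idea declaratively. A sketch in Python with SQLite (assumes SQLite >= 3.25; the sample rows are assumptions for the example):

```python
import sqlite3

# Row position plus previous/next neighbour via window functions
# (assumption: SQLite >= 3.25; sample cars are made up for the example).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cars (car_id INTEGER PRIMARY KEY, url TEXT, signup TEXT);
INSERT INTO cars VALUES
 (1,'/a','2012-01-01'),(3,'/b','2012-01-02'),(7,'/c','2012-01-03');
""")
row = conn.execute("""
SELECT pos, previd, prevurl, nextid, nexturl FROM (
  SELECT car_id,
         ROW_NUMBER() OVER (ORDER BY signup) AS pos,
         LAG(car_id)  OVER (ORDER BY signup) AS previd,
         LAG(url)     OVER (ORDER BY signup) AS prevurl,
         LEAD(car_id) OVER (ORDER BY signup) AS nextid,
         LEAD(url)    OVER (ORDER BY signup) AS nexturl
  FROM cars
) WHERE car_id = 3
""").fetchone()
print(row)  # (2, 1, '/a', 7, '/c'): position 2, neighbours 1 and 7
```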
    
    qid & accept id: (9056169, 9056277) query: Ranges on multiple columns soup:


    If you're looking for the first range that contains at least a part of the block, try a condition like:

    vala <= colb and cola <= valb
    

    This says the search range [vala,valb] must partially overlap with the target range [cola,colb].

    In SQL:

    select  *
    from    example
    where   vala <= colb and cola <= valb
    order by
            cola -- Lowest network range
    limit   1
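    The overlap condition is easy to sanity-check. A sketch in Python with SQLite, using the example table name from the answer; the stored ranges and the search block are assumptions:

```python
import sqlite3

# Verify the interval-overlap condition: [vala,valb] overlaps [cola,colb]
# exactly when vala <= colb AND cola <= valb.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE example (cola INTEGER, colb INTEGER);
INSERT INTO example VALUES (0,9),(10,19),(20,29);
""")
vala, valb = 15, 24   # assumed search block straddling two stored ranges
rows = conn.execute("""
SELECT * FROM example
WHERE ? <= colb AND cola <= ?
ORDER BY cola  -- lowest range first
LIMIT 1
""", (vala, valb)).fetchall()
print(rows)  # [(10, 19)]: the first range overlapping [15, 24]
```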
    
    qid & accept id: (9127317, 9127415) query: How to order by a column (which match a criteria) in SQL? soup:


    Use isnull: if UPDATE_DATE is null, CREATION_DATE is used to order the rows.

    select * 
    from table
    order by isnull(UPDATE_DATE, CREATION_DATE) asc
    

    Read more about isnull on MSDN.

    coalesce is an alternative and, being standard SQL, it works in most RDBMSs.

    select * 
    from table
    order by coalesce(UPDATE_DATE, CREATION_DATE) asc
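    The coalesce form can be checked directly in SQLite, which supports it too. The table and rows below are assumptions for the example:

```python
import sqlite3

# Order by UPDATE_DATE, falling back to CREATION_DATE when it is NULL
# (sample table and rows are assumptions for the example).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INTEGER, UPDATE_DATE TEXT, CREATION_DATE TEXT);
INSERT INTO t VALUES (1, NULL, '2012-02-03'),
                     (2, '2012-02-01', '2012-01-01'),
                     (3, NULL, '2012-02-02');
""")
ids = [r[0] for r in conn.execute(
    "SELECT id FROM t ORDER BY coalesce(UPDATE_DATE, CREATION_DATE) ASC")]
print(ids)  # [2, 3, 1]: sorted by update date, or creation date if none
```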
    
    qid & accept id: (9153901, 9154036) query: Select max value within other select statement and display also a relevant field from the nested select soup:


    You don't even need a subquery:

    SELECT COUNT(bc.taken) AS mn
         , b.title
    FROM books_clients AS bc
      JOIN books b 
        ON b.book_id = bc.book_id
    GROUP BY b.title
    ORDER BY mn DESC
    LIMIT 1
    

    If more than one title ties for the same maximum count, then you need a subquery:

    SELECT allb.mn
         , allb.title
    FROM 
        ( SELECT COUNT(bc.taken) AS mn
          FROM books_clients AS bc
            JOIN books b 
              ON b.book_id = bc.book_id
          GROUP BY b.title
          ORDER BY mn DESC
          LIMIT 1
        ) AS maxb
      JOIN
        ( SELECT COUNT(bc.taken) AS mn
               , b.title
          FROM books_clients AS bc
            JOIN books b 
              ON b.book_id = bc.book_id
          GROUP BY b.title
        ) AS allb
        ON allb.mn = maxb.mn
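    With the join condition corrected to allb.mn = maxb.mn, the tie-handling version behaves as intended. A sketch in Python with SQLite; the sample data is an assumption chosen so that two titles tie for the maximum:

```python
import sqlite3

# Tie-handling max-count query; note the join on allb.mn = maxb.mn
# (sample books/loans are assumptions; Dune and Emma tie at 2 loans).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE books (book_id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE books_clients (book_id INTEGER, taken INTEGER);
INSERT INTO books VALUES (1,'Dune'),(2,'Emma'),(3,'Ivanhoe');
INSERT INTO books_clients VALUES (1,1),(1,1),(2,1),(2,1),(3,1);
""")
rows = conn.execute("""
SELECT allb.mn, allb.title
FROM ( SELECT COUNT(bc.taken) AS mn
       FROM books_clients bc JOIN books b ON b.book_id = bc.book_id
       GROUP BY b.title ORDER BY mn DESC LIMIT 1 ) AS maxb
JOIN ( SELECT COUNT(bc.taken) AS mn, b.title
       FROM books_clients bc JOIN books b ON b.book_id = bc.book_id
       GROUP BY b.title ) AS allb
  ON allb.mn = maxb.mn
ORDER BY allb.title
""").fetchall()
print(rows)  # [(2, 'Dune'), (2, 'Emma')]: both tied titles come back
```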
    
    qid & accept id: (9172621, 9173022) query: Enumerate in postgresql soup:


    I'm not sure what you're asking for. The "row number in points group" is a straightforward window-function application, but I don't know what "array of ids" means.

    Given data like this:

     id | player_id | game_id | points 
    ----+-----------+---------+--------
      1 |         1 |       1 |      0
      2 |         1 |       2 |      1
      3 |         1 |       3 |      5
      4 |         2 |       1 |      1
      5 |         2 |       2 |      0
      6 |         2 |       3 |      0
      7 |         3 |       1 |      2
      8 |         3 |       2 |      3
      9 |         3 |       3 |      1
    

    You can get the per-game rankings with this:

    select game_id, player_id, points,
           rank() over (partition by game_id order by points desc)
    from players
    

    That will give you output like this:

     game_id | player_id | points | rank 
    ---------+-----------+--------+------
           1 |         3 |      2 |    1
           1 |         2 |      1 |    2
           1 |         1 |      0 |    3
           2 |         3 |      3 |    1
           2 |         1 |      1 |    2
           2 |         2 |      0 |    3
           3 |         1 |      5 |    1
           3 |         3 |      1 |    2
           3 |         2 |      0 |    3
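    The rank() query runs unchanged on any engine with window functions. Here it is checked in Python with SQLite (assumes SQLite >= 3.25), against the same sample data as above:

```python
import sqlite3

# Per-game ranking with rank() over a partition, on the sample data above
# (assumption: SQLite >= 3.25 for window-function support).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE players (id INTEGER, player_id INTEGER, game_id INTEGER, points INTEGER);
INSERT INTO players VALUES
 (1,1,1,0),(2,1,2,1),(3,1,3,5),(4,2,1,1),(5,2,2,0),
 (6,2,3,0),(7,3,1,2),(8,3,2,3),(9,3,3,1);
""")
rows = conn.execute("""
select game_id, player_id, points,
       rank() over (partition by game_id order by points desc) as rnk
from players
order by game_id, rnk
""").fetchall()
print(rows[:3])  # game 1 rankings: players 3, 2, 1
```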
    
    qid & accept id: (9197597, 9198019) query: How to determine first instance of multiple items in a table soup:


    As far as I know, MySQL can only do this using a correlated sub-query, or joining on a sub-query...


    Correlated-Sub-Query:

    SELECT
      count(browser), browser
    FROM
      access
    WHERE
          date = (SELECT MIN(date) FROM access AS lookup WHERE ip = access.ip)
      AND date > '2011-11-1'
      AND date < '2011-12-1' 
    GROUP BY
      browser
    


    Sub-Query:

    SELECT
      count(access.browser), access.browser
    FROM
      (SELECT ip, MIN(date) AS date FROM access GROUP BY ip) AS lookup
    INNER JOIN
      access
        ON  access.ip   = lookup.ip
        AND access.date = lookup.date
    WHERE
          lookup.date > '2011-11-1'
      AND lookup.date < '2011-12-1' 
    GROUP BY
      access.browser
    

    Either way, be sure to have an index on (ip, date).
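    The join-on-MIN(date) variant can be checked on a tiny sample. A sketch in Python with SQLite; the IPs, browsers, and dates are assumptions for the example, and the date literals are zero-padded so that plain text comparison works:

```python
import sqlite3

# Count browsers only for each IP's *first* visit, restricted to November
# (sample rows are assumptions; dates zero-padded for text comparison).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE access (ip TEXT, browser TEXT, date TEXT);
INSERT INTO access VALUES
 ('1.1.1.1','Firefox','2011-11-02'), ('1.1.1.1','Chrome','2011-11-20'),
 ('2.2.2.2','Chrome','2011-11-05'),  ('3.3.3.3','Chrome','2011-10-01');
""")
rows = conn.execute("""
SELECT count(access.browser), access.browser
FROM (SELECT ip, MIN(date) AS date FROM access GROUP BY ip) AS lookup
JOIN access ON access.ip = lookup.ip AND access.date = lookup.date
WHERE lookup.date > '2011-11-01' AND lookup.date < '2011-12-01'
GROUP BY access.browser
ORDER BY access.browser
""").fetchall()
print(rows)  # first visits in November only: one Chrome, one Firefox
```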

    qid & accept id: (9206962, 9207002) query: Oracle SQL - Using joins to find values in one table, and not another soup:


    SubSELECTs are fine when used appropriately... "someone does not like something" alone is not a good enough reason IMHO.

    There are several options - just 2 as examples:

    SELECT nums.number FROM nums 
    LEFT OUTER JOIN even ON even.number = nums.number 
    WHERE even.number IS NULL
    

    OR

    SELECT nums.number FROM nums
    MINUS
    SELECT even.number FROM even
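    Both variants are easy to verify on a small sample. A sketch in Python with SQLite; note that SQLite (and standard SQL) spell Oracle's MINUS as EXCEPT:

```python
import sqlite3

# Anti-join vs set difference: rows in nums that are not in even.
# (EXCEPT is the SQLite/standard spelling of Oracle's MINUS.)
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE nums (number INTEGER);
CREATE TABLE even (number INTEGER);
INSERT INTO nums VALUES (1),(2),(3),(4);
INSERT INTO even VALUES (2),(4);
""")
anti = conn.execute("""
SELECT nums.number FROM nums
LEFT OUTER JOIN even ON even.number = nums.number
WHERE even.number IS NULL
""").fetchall()
minus = conn.execute(
    "SELECT number FROM nums EXCEPT SELECT number FROM even").fetchall()
print(anti, minus)  # both give the odd numbers 1 and 3
```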
    
    qid & accept id: (9218949, 9219440) query: Query places that have common tags in database soup:


    You can use this query to produce the results below:

    select p1.name, p2.name, t.name
    from places p1
    join placestags pt1 on p1.id=pt1.placeid
    join placestags pt2 on pt1.tagid=pt2.tagid and pt2.placeid <> p1.id
    join places p2 on pt2.placeid=p2.id
    join tags t on t.id=pt1.tagid
    order by p1.id, t.id
    

    This does not get everything into a single row like you wanted (you'd need a pivot for that, and I don't think sqlite has it), but it lets you see what is going on. Here is what you'd get from this query:

    Place1      | Place2      | Shared_Tag
    ------------|-------------|-----------
    McDonalds   | Burger King | Burgers
    McDonalds   | Burger King | Fries
    Burger King | McDonalds   | Burgers
    Burger King | McDonalds   | Fries
    

    EDIT (in response to a comment):

    If you are looking to shorten the query time, try reducing the number of joins, and remove the symmetric duplicates, like this:

    select pt1.placeid, pt2.placeid, pt1.tagid
    from placestags pt1
    join placestags pt2 on pt1.tagid=pt2.tagid and pt2.placeid > pt1.placeid
    order by pt1.placeid, pt1.tagid
    
    qid & accept id: (9237650, 9237695) query: Sort Days of the Week in SQL soup:


    If you are stuck with the data as is, I would recommend that you add an ORDER BY clause. Within the ORDER BY clause you will want to map each distinct value to a numeric value.

    e.g., Using IIf

    SELECT Slot.Day
    FROM Slot
    GROUP BY Slot.Day
    ORDER BY IIf(Slot.Day = "Monday", 1,
             IIf(Slot.Day = "Tuesday", 2,
             IIf(Slot.Day = "Wednesday", 3,
             IIf(Slot.Day = "Thursday", 4,
             IIf(Slot.Day = "Friday", 5)))));
    

    e.g., Using SWITCH

    SELECT Slot.Day
    FROM Slot
    GROUP BY Slot.Day
    ORDER BY SWITCH(Slot.Day = 'Monday', 1,
                    Slot.Day = 'Tuesday', 2,
                    Slot.Day = 'Wednesday', 3,
                    Slot.Day = 'Thursday', 4,
                    Slot.Day = 'Friday', 5);
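    IIf and SWITCH are Access (Jet) functions; on other engines the portable spelling is a CASE expression. A sketch in Python with SQLite, with the sample rows assumed for the example:

```python
import sqlite3

# Weekday ordering via a CASE expression (portable equivalent of the
# Access IIf/SWITCH forms; sample rows are assumptions).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Slot (Day TEXT);
INSERT INTO Slot VALUES ('Wednesday'),('Monday'),('Friday'),('Monday');
""")
days = [r[0] for r in conn.execute("""
SELECT Day FROM Slot
GROUP BY Day
ORDER BY CASE Day WHEN 'Monday' THEN 1 WHEN 'Tuesday' THEN 2
                  WHEN 'Wednesday' THEN 3 WHEN 'Thursday' THEN 4
                  WHEN 'Friday' THEN 5 END
""")]
print(days)  # ['Monday', 'Wednesday', 'Friday']
```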
    
    qid & accept id: (9288893, 9289059) query: SQL: ORDER BY based on two columns of interlaced values soup:


    You can't. The order is not well defined.

    The simple set

    5    10
    7    null
    null 8
    

    can be sorted

    null 8
    5    10
    7    null
    

    and

    5    10
    7    null
    null 8
    

    depending on where you start sorting.

    If possible I would change the sort criteria to "X if available, otherwise Y". Then you could use COALESCE as suggested by "mu is too short": order by coalesce(x, y).

    qid & accept id: (9301321, 9301358) query: sql to find certain ids and fillins soup:


    You can use a UNION to combine the records WHERE id IN (1, 2) with a second query that returns your random record.

    SELECT *
    FROM table
    WHERE id IN (1, 2)
    
    UNION
    
    SELECT Top 1 *
    FROM table
    

    If you provide more details about your query, then I can provide a more detailed answer.

    Edit: Based on your comment you should be able to do something like this:

    SELECT * 
    FROM list_cards 
    WHERE card_id IN (1, 2) AND qty > 0
    
    UNION
    
    SELECT * 
    FROM list_cards 
    WHERE qty > 0
    

    If you want to be sure you always get 3 results:

    SELECT TOP 3 C.*
    FROM
    (
        SELECT C.*, '1' as Priority
        FROM list_cards C
        WHERE C.card_id IN (1, 2) AND qty > 0
    
        UNION
    
        SELECT C.*, '2' as Priority
        FROM list_cards C
        WHERE qty > 0
    ) C
    ORDER BY C.Priority
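    The "always 3 results" pattern ports to engines without TOP by using LIMIT. A sketch in Python with SQLite; the sample cards are assumptions, and only the priority-1 row's position is guaranteed by the ORDER BY:

```python
import sqlite3

# Priority UNION with LIMIT 3 in place of TOP 3 (sample cards assumed;
# requested cards sort first, filler rows pad the result to 3).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE list_cards (card_id INTEGER, qty INTEGER);
INSERT INTO list_cards VALUES (1,2),(2,0),(5,1),(9,4);
""")
rows = conn.execute("""
SELECT card_id, qty FROM (
    SELECT card_id, qty, '1' AS Priority
    FROM list_cards WHERE card_id IN (1,2) AND qty > 0
    UNION
    SELECT card_id, qty, '2' AS Priority
    FROM list_cards WHERE qty > 0
) ORDER BY Priority LIMIT 3
""").fetchall()
print(rows)  # card 1 first (priority 1), then two filler rows
```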
    
    qid & accept id: (9355066, 9355094) query: A MySQL query addressing three tables: How many from A are not in B or C? soup:


    If you want no ads in either table, then the sort of query you are after is:

    SELECT id
    FROM members
    WHERE id NOT IN ( any id from any other table )
    

    To select ids from other tables:

    SELECT id
    FROM 
    

    Hence:

    SELECT id
    FROM members
    WHERE id NOT IN (SELECT id FROM dog_shareoffered)
     AND  id NOT IN (SELECT id FROM dog_sharewanted)
    

    Note that one member may put in many ads, but there's only one id per member. I originally had a SELECT DISTINCT in the subqueries above, but as comments below mention, this is not necessary.

    If you wanted to avoid a sub-query (a possible performance increase, depending..) you could use some LEFT JOINs:

    SELECT members.id
    FROM members
    LEFT JOIN dog_shareoffered
     ON dog_shareoffered.id = members.id
    LEFT JOIN dog_sharewanted
     ON dog_sharewanted.id = members.id
    WHERE dog_shareoffered.id IS NULL
      AND dog_sharewanted.id IS NULL
    

    Why this works:

    It takes the table members and joins it to the other two tables on the id column. The LEFT JOIN means that if a member exists in the members table but not the table we're joining to (e.g. dog_shareoffered), then the corresponding dog_shareoffered columns will have NULL in them.

    So, the WHERE condition picks out rows where there's a NULL id in both dog_shareoffered and dog_sharewanted, meaning we've found ids in members with no corresponding id in the other two tables.

    qid & accept id: (9356686, 9356787) query: mysql query for related articles soup:


    Try

    select a.* from Article a
    inner join ArticleTag at
      on at.idArticle = a.idArticle
    where at.idTag in (select idTag from ArticleTag where idArticle =5)
    

    or

    select a.* from Article a
    inner join ArticleTag at on at.idArticle= a.idArticle
    inner join ArticleTag at2 on at2.idTag = at.idTag and at2.idArticle != at.idArticle
    where at2.idArticle = 5
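    The first (IN-subquery) form can be checked on a tiny sample. A sketch in Python with SQLite; the DISTINCT and the a.idArticle <> 5 filter are small additions so each related article appears once and the source article itself is excluded:

```python
import sqlite3

# Related articles = articles sharing at least one tag with article 5
# (sample articles/tags are assumptions; 5 and 7 share tag 1).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Article (idArticle INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE ArticleTag (idArticle INTEGER, idTag INTEGER);
INSERT INTO Article VALUES (5,'base'),(7,'related'),(9,'other');
INSERT INTO ArticleTag VALUES (5,1),(7,1),(9,2);
""")
rows = conn.execute("""
select distinct a.idArticle, a.title from Article a
inner join ArticleTag at2 on at2.idArticle = a.idArticle
where at2.idTag in (select idTag from ArticleTag where idArticle = 5)
  and a.idArticle <> 5
""").fetchall()
print(rows)  # only article 7 is related
```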
    
    qid & accept id: (9394879, 9395523) query: Sum totals for columns soup:


    This is going to look complicated, but bear with me. It needs some clarification on what is meant by others/rate, but the principle is sound. If you have a primary key on financies that you can use, then a more elegant GROUP BY ... WITH ROLLUP solution may be viable, but I haven't sufficient experience with that to offer reliable advice. Here is how I would address the issue.

    Long-winded option

    (
        SELECT
            financesTallied.date,
            financesTallied.rate,
            financesTallied.supply_fee,
            financesTallied.demand_fee,
            financesTallied.charged_fee,
            financesTallied.total_costs,
            financesTallied.net_return
    
        FROM (
    
            SELECT
                financeWithNetReturn.*,
                @supplyFee := @supplyFee + financeWithNetReturn.supply_fee,
                @demandFee := @demandFee + financeWithNetReturn.demand_fee,
                @chargedFee := @chargedFee + financeWithNetReturn.charged_fee
            FROM 
            ( -- Calculate net return based off total costs
                SELECT 
                    financeData.*,
                    financeData.supply_fee - financeData.total_costs AS net_return
                FROM 
                ( -- Select the data
                    SELECT
                        date, 
                        rate, 
                        supply_fee, 
                        demand_fee, 
                        charged_fee,
                        (supply_fee+demand_fee+charged_fee)/rate AS total_costs -- need clarification on others/rate
                    FROM financies
                    WHERE date BETWEEN '2010-01-10' AND '2011-01-01'
                    ORDER BY date ASC
                ) AS financeData
            ) AS financeWithNetReturn,
            (
                SELECT
                @supplyFee := 0,
                @demandFee := 0,
                @chargedFee := 0
            ) AS variableInit
        ) AS financesTallied
    ) UNION (
        SELECT
            '*Total*',
            NULL,
            @supplyFee,
            @demandFee,
            @chargedFee,
            NULL,
            NULL
    )
    

    Working from the innermost query to the outermost: this query selects the basic fees and calculates total_costs for the row. The total_costs formula will need adjustment, as I'm not 100% clear on what you were looking for there. I will refer to this as [SQ1].

                SELECT
                    date, 
                    rate, 
                    supply_fee, 
                    demand_fee, 
                    charged_fee,
                    (supply_fee+demand_fee+charged_fee)/rate AS total_costs -- need clarification on others/rate
                FROM financies
                WHERE date BETWEEN '2010-01-10' AND '2011-01-01'
                ORDER BY date ASC
    

    Next level up, I'm just reusing the calculated total_costs column together with the supply_fee column to add a net_return column. This concludes the basic per-row data you need; I will refer to this as [SQL2]

            SELECT 
                financeData.*,
                financeData.supply_fee - financeData.total_costs AS net_return
            FROM 
            ([SQ1]) AS financeData
    

    At this level it's time to start tallying up the values, so we need to initialise the required variables to 0 ([SQL3])

            SELECT
                @supplyFee := 0,
                @demandFee := 0,
                @chargedFee := 0
    

    Next level up, I'm using the calculated rows to accumulate the totals ([SQL4])

        SELECT
            financeWithNetReturn.*,
            @supplyFee := @supplyFee + financeWithNetReturn.supply_fee,
            @demandFee := @demandFee + financeWithNetReturn.demand_fee,
            @chargedFee := @chargedFee + financeWithNetReturn.charged_fee
        FROM 
        ([SQL2]) AS financeWithNetReturn,
        ([SQL3]) AS variableInit
    

    Now finally, at the top level, we just need to output the desired columns without the variable-assignment columns ([SQL5])

    SELECT
        financesTallied.date,
        financesTallied.rate,
        financesTallied.supply_fee,
        financesTallied.demand_fee,
        financesTallied.charged_fee,
        financesTallied.total_costs,
        financesTallied.net_return
    
    FROM ([SQL4]) AS financesTallied
    

    And then output it UNIONed with a totals row:

    ([SQL5]) UNION (
        SELECT
            '*Total*',
            NULL,
            @supplyFee,
            @demandFee,
            @chargedFee,
            NULL,
            NULL
    )
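
    For what it's worth, on MySQL 8.0+ the same result can be had without user variables at all (their evaluation order inside a SELECT was never guaranteed, and assigning them in expressions is deprecated in 8.0). An untested sketch against the same `financies` table:

    ```sql
    -- Per-row figures plus grand totals via window functions (MySQL 8.0+).
    SELECT
        date,
        rate,
        supply_fee,
        demand_fee,
        charged_fee,
        (supply_fee + demand_fee + charged_fee) / rate AS total_costs,
        supply_fee - (supply_fee + demand_fee + charged_fee) / rate AS net_return,
        SUM(supply_fee)  OVER () AS total_supply_fee,
        SUM(demand_fee)  OVER () AS total_demand_fee,
        SUM(charged_fee) OVER () AS total_charged_fee
    FROM financies
    WHERE date BETWEEN '2010-01-10' AND '2011-01-01'
    ORDER BY date ASC;
    ```

    Here SUM(...) OVER () repeats the grand total on every row instead of appending a separate *Total* row, so the UNION trick is no longer needed.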
    
    qid & accept id: (9403894, 9403921) query: Sort by a particular value soup:

    soup wrap:

    Use the FIND_IN_SET function:

    http://dev.mysql.com/doc/refman/5.1/en/string-functions.html#function_find-in-set

    Code will look like this:

    ORDER BY FIND_IN_SET(pa.status, 'pending,failed,application,submitted,canceled')
    

    Here is how I would rewrite your SQL query:

    SELECT
      cl.id, cl.lead_id, cl.client_name, 
      po.id, po.carrier,
      pa.downpayment_time, pa.status, pa.policy_id
    FROM
      pdp_client_info AS cl
      JOIN pdp_policy_info AS po ON (cl.id = po.id)
      JOIN pdp_payment AS pa ON (po.id = pa.policy_id)
    WHERE
      (pa.downpayment_date = '$current_date')
      AND (pa.status IN ('pending', 'failed', 'application', 'submitted', 'canceled'))
    ORDER BY
      FIND_IN_SET(pa.status, 'pending,failed,application,submitted,canceled')
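
    One caveat worth knowing: FIND_IN_SET returns 0 when the value is not in the list, so any status outside the five listed would sort first; the IN (...) filter above guards against that. On databases without FIND_IN_SET, a CASE expression (sketched here, same column names assumed) gives the same ordering:

    ```sql
    ORDER BY CASE pa.status
        WHEN 'pending'     THEN 1
        WHEN 'failed'      THEN 2
        WHEN 'application' THEN 3
        WHEN 'submitted'   THEN 4
        WHEN 'canceled'    THEN 5
        ELSE 6  -- unknown statuses sort last
    END
    ```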
    
    qid & accept id: (9414038, 9414525) query: Join between two master and one detail Table soup:

    soup wrap:

    Haven't got access to Northwind right now, so this is untested, but you should get the idea...

    There's no need to get data about employees if all you want is sales per region. Your subquery is therefore redundant...

    select
          r.RegionDescription, 
          sum(OD.Quantity*OD.UnitPrice)
    from
       Region R
       inner join Territories T
           on R.RegionID=T.RegionID
       inner join EmployeeTerritories ET
           on T.TerritoryID=ET.TerritoryID
       inner join  Employees E
           on ET.EmployeeID=E.EmployeeID
       inner join Orders O
           on E.EmployeeID=o.EmployeeID
       inner join [Order Details] OD
                    on o.OrderID=OD.OrderID
      Group by r.RegionDescription
    

    As discussed in the comments, this "double counts" sales where an employee is assigned to more than one region. In many cases, this is desired behaviour - if you want to know how well a region is doing, you need to know how many sales came from that region, and if an employee is assigned to more than one region, that doesn't affect the region's performance.

    However, it means you overstate the sales if you add up all the regions.

    There are two strategies to avoid this. One is to assign the sale to just one region; in the comments, you say there's no data on which to make that decision, so you could do it on the "lowest TerritoryID" - something like:

    select
          r.RegionDescription, 
          sum(OD.Quantity*OD.UnitPrice)
    from
       Region R
       inner join Territories T
           on R.RegionID=T.RegionID
       inner join EmployeeTerritories ET
           on T.TerritoryID=ET.TerritoryID
       inner join  Employees E
           on ET.EmployeeID=E.EmployeeID
       inner join Orders O
           on E.EmployeeID=o.EmployeeID
       inner join [Order Details] OD
                    on o.OrderID=OD.OrderID
      where ET.TerritoryID = (
          select min(ET2.TerritoryID)
          from EmployeeTerritories ET2
          where ET2.EmployeeID = ET.EmployeeID
      )
      Group by r.RegionDescription
    

    (again, no access to DB, so can't test - but this should filter out duplicates).

    Alternatively, you can assign a proportion of the sale to each region - though then rounding may cause the totals not to add up properly. That's a query I'd like to try before posting though!
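
    For completeness, here is a sketch of that proportional approach, with the same caveat that it's untested: each sale is divided by the number of territories its employee covers, so the regional totals add up to the true total (give or take rounding).

    ```sql
    select
        r.RegionDescription,
        sum(OD.Quantity * OD.UnitPrice / TC.TerritoryCount) as Sales
    from Region R
        inner join Territories T on R.RegionID = T.RegionID
        inner join EmployeeTerritories ET on T.TerritoryID = ET.TerritoryID
        inner join Employees E on ET.EmployeeID = E.EmployeeID
        inner join (
            -- number of territories each employee is assigned to
            select EmployeeID, count(*) as TerritoryCount
            from EmployeeTerritories
            group by EmployeeID
        ) TC on TC.EmployeeID = E.EmployeeID
        inner join Orders O on E.EmployeeID = O.EmployeeID
        inner join [Order Details] OD on O.OrderID = OD.OrderID
    group by r.RegionDescription
    ```

    The derived table avoids putting a subquery inside the aggregate, which SQL Server does not allow.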

    qid & accept id: (9419615, 19190385) query: SQL query on many-to-many with redundant constraint soup:

    soup wrap:

    Well, I think I wasn't very clear in my description. The solution I found is to proceed in steps, using not just SQL but also PHP.

    I do a first search with the first criterion:

    where Topics.PK_TOPICS=8
    

    I get the result in a PHP array. Then, a second one, with the second criterion:

    where Topics.PK_TOPICS=15
    

    I get the results in another, temporary PHP array. Then I use PHP's array_intersect():

    $results = array_intersect($results, $temp_results);
    

    to keep only the results that match both criteria. Obviously, I can reuse $results to intersect as many times as I want, so there is no limit on the number of search criteria.
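
    For reference, the same intersection can usually be done in one SQL query with the classic GROUP BY / HAVING pattern. The join-table and column names below are hypothetical (adjust to your schema); the idea is to keep only items linked to every listed topic:

    ```sql
    -- ItemTopics(FK_ITEMS, FK_TOPICS) is a hypothetical many-to-many table.
    SELECT FK_ITEMS
    FROM ItemTopics
    WHERE FK_TOPICS IN (8, 15)
    GROUP BY FK_ITEMS
    HAVING COUNT(DISTINCT FK_TOPICS) = 2  -- must match all listed topics
    ```

    The HAVING count must equal the number of topics in the IN list, so this extends to any number of criteria without round-trips to PHP.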

    Hope this helps…

    qid & accept id: (9429371, 9429424) query: Sql Query to count same date entries soup:

    soup wrap:

    The reason you get what you get is that you are also comparing the time, down to the second, so only entries created within the same second are grouped together.

    To achieve what you actually want, you need to apply a date function to the created_at column:

    SELECT COUNT(1) AS entries, DATE(created_at) as date
    FROM wp_frm_items
    WHERE user_id =1
    GROUP BY DATE(created_at)
    LIMIT 0 , 30
    

    This would remove the time part from the column field, and so group together any entries created on the same day. You could take this further by removing the day part to group entries created on the same month of the same year etc.

    To restrict the query to entries created in the current month, you add a WHERE-clause to the query to only select entries that satisfy that condition. Here's an example:

    SELECT COUNT(1) AS entries, DATE(created_at) as date 
    FROM  wp_frm_items
    WHERE user_id = 1 
      AND created_at >= DATE_FORMAT(CURDATE(),'%Y-%m-01') 
    GROUP BY DATE(created_at)
    

    Note: The COUNT(1)-part of the query simply means Count each row, and you could just as well have written COUNT(*), COUNT(id) or any other field. Historically, the most efficient approach was to count the primary key, since that is always available in whatever index the query engine could utilize. COUNT(*) used to have to leave the index and retrieve the corresponding row in the table, which was sometimes inefficient. In more modern query planners this is probably no longer the case. COUNT(1) is another variant of this that didn't force the query planner to retrieve the rows from the table.

    Edit: The query to group by month can be created in a number of different ways. Here is an example:

    SELECT COUNT(1) AS entries, DATE_FORMAT(created_at,'%Y-%c') as month
    FROM wp_frm_items
    WHERE user_id =1
    GROUP BY DATE_FORMAT(created_at,'%Y-%c')
    
    qid & accept id: (9432630, 9433395) query: String concatenation in SQL server soup:

    soup wrap:

    In case you need to do this as a set and not one row at a time. Given the following split function:

    USE tempdb;
    GO
    CREATE FUNCTION dbo.SplitStrings(@List NVARCHAR(MAX))
    RETURNS TABLE
    AS
       RETURN ( SELECT Item FROM
           ( SELECT Item = x.i.value('(./text())[1]', 'nvarchar(max)')
         FROM ( SELECT [XML] = CONVERT(XML, '<i>'
         + REPLACE(@List, ',', '</i><i>') + '</i>').query('.')
               ) AS a CROSS APPLY [XML].nodes('i') AS x(i) ) AS y
           WHERE Item IS NOT NULL
       );
    GO
    

    Then with the following table and sample data, and string variable, you can get all of the results this way:

    DECLARE @foo TABLE(ID INT IDENTITY(1,1), col NVARCHAR(MAX));
    
    INSERT @foo(col) SELECT N'c,d,e,f,g';
    INSERT @foo(col) SELECT N'c,e,b';
    INSERT @foo(col) SELECT N'd,e,f,x,a,e';
    
    DECLARE @string NVARCHAR(MAX) = N'a,b,c,d';
    
    ;WITH x AS
    (
        SELECT f.ID, c.Item FROM @foo AS f
        CROSS APPLY dbo.SplitStrings(f.col) AS c
    ), y AS
    (
        SELECT ID, Item FROM x
        UNION
        SELECT x.ID, s.Item
            FROM dbo.SplitStrings(@string) AS s
            CROSS JOIN x
    )
    SELECT DISTINCT ID, Items = STUFF((SELECT ',' + Item 
        FROM y AS y2 WHERE y2.ID = y.ID 
        FOR XML PATH(''), TYPE).value('.[1]', 'nvarchar(max)'), 1, 1, N'')
    FROM y;
    

    Results:

    ID   Items
    --   ----------
     1   a,b,c,d,e,f,g
     2   a,b,c,d,e
     3   a,b,c,d,e,f,x
    

    Now, all that said, what you really should do is follow the previous advice and store these things in a related table in the first place. You can use the same type of splitting methodology to store the strings separately whenever an insert or update happens, instead of just dumping the CSV into a single column, and your applications shouldn't really have to change the way they're passing data into your procedures. But it sure will be easier to get the data out!
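
    A minimal sketch of that related-table design (table and column names here are just examples, adjust to your schema):

    ```sql
    -- Hypothetical normalized storage for the CSV values:
    CREATE TABLE dbo.FooItem
    (
        FooID INT NOT NULL,          -- key of the row that owned the CSV string
        Item  NVARCHAR(32) NOT NULL,
        PRIMARY KEY (FooID, Item)    -- one row per (owner, value), no dupes
    );

    -- One-time migration using the same split function:
    INSERT dbo.FooItem (FooID, Item)
    SELECT src.ID, s.Item
    FROM dbo.Foo AS src              -- your existing table with the CSV column
    CROSS APPLY dbo.SplitStrings(src.col) AS s;
    ```

    After that, the set-based query above reduces to ordinary joins against dbo.FooItem.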

    EDIT

    Adding a potential solution for SQL Server 2008 that is a bit more convoluted but gets things done with one less loop (using a massive table scan and replace instead). I don't think this is any better than the solution above, and it is certainly less maintainable, but it is an option to test out should you find you are able to upgrade to 2008 or better (and also for any 2008+ users who come across this question).

    SET NOCOUNT ON;
    
    -- let's pretend this is our static table:
    
    CREATE TABLE #x
    (
        ID INT IDENTITY(1,1),
        col NVARCHAR(MAX)
    );
    
    INSERT #x(col) VALUES(N'c,d,e,f,g'), (N'c,e,b'), (N'd,e,f,x,a,e');
    
    -- and here is our parameter:
    
    DECLARE @string NVARCHAR(MAX) = N'a,b,c,d';
    

    The code:

    DECLARE @sql NVARCHAR(MAX) = N'DECLARE @src TABLE(ID INT, col NVARCHAR(32));
        DECLARE @dest TABLE(ID INT, col NVARCHAR(32));';
    
    SELECT @sql += '
        INSERT @src VALUES(' + RTRIM(ID) + ','''
        + REPLACE(col, ',', '''),(' + RTRIM(ID) + ',''') + ''');'
    FROM #x;
    
    SELECT @sql += '
        INSERT @dest VALUES(' + RTRIM(ID) + ','''
        + REPLACE(@string, ',', '''),(' + RTRIM(ID) + ',''') + ''');'
    FROM #x;
    
    SELECT @sql += '
        WITH x AS (SELECT ID, col FROM @src UNION SELECT ID, col FROM @dest)
        SELECT DISTINCT ID, Items = STUFF((SELECT '','' + col
         FROM x AS x2 WHERE x2.ID = x.ID FOR XML PATH('''')), 1, 1, N'''')
         FROM x;'
    
    EXEC sp_executesql @sql;
    GO
    DROP TABLE #x;
    

    This is much trickier to do in 2005 (though not impossible) because you need to change the VALUES() clauses to UNION ALL...

    qid & accept id: (9459554, 9460084) query: Get the max value of a column from set of rows soup:

    soup wrap:

    I think this is the query you're looking for:

    select b.*, c.filenumber from b
    join (
      select id, max(count) as count from a
      group by id
    ) as NewA on b.id = NewA.id
    join c on NewA.count = c.count
    

    However, you should take into account that I don't get why for id=1 in tableA you choose the 16 to match against table C (which is the max) and for id=2 in tableA you choose the 10 to match against table C (which is the min). I assumed you meant the max in both cases.

    Edit:

    I see you've updated tableA data. The query results in this, given the previous data:

    +----+---------------+------------+
    | ID |   FILENAME    | FILENUMBER |
    +----+---------------+------------+
    |  1 | sample1.file  |       1234 |
    |  2 | sample2.file  |       3456 |
    |  3 | sample3.file  |       4567 |
    +----+---------------+------------+
    

    Here is a working example

    qid & accept id: (9475177, 9486410) query: SQL: Select transactions where rows are not of criteria inside the same table soup:

    soup wrap:

    Here is a solution based on nested subqueries. First, I added a few rows to catch a few more cases. Transaction 10, for example, should not be cancelled by transaction 12, because transaction 11 comes in between.

    > select * from transactions order by date_time;
    +----+---------+------+---------------------+--------+
    | id | account | type | date_time           | amount |
    +----+---------+------+---------------------+--------+
    |  1 |       1 | R    | 2012-01-01 10:01:00 |   1000 |
    |  2 |       3 | R    | 2012-01-02 12:53:10 |   1500 |
    |  3 |       3 | A    | 2012-01-03 13:10:01 |  -1500 |
    |  4 |       2 | R    | 2012-01-03 17:56:00 |   2000 |
    |  5 |       1 | R    | 2012-01-04 12:30:01 |   1000 |
    |  6 |       2 | A    | 2012-01-04 13:23:01 |  -2000 |
    |  7 |       3 | R    | 2012-01-04 15:13:10 |   3000 |
    |  8 |       3 | R    | 2012-01-05 12:12:00 |   1250 |
    |  9 |       3 | A    | 2012-01-06 17:24:01 |  -1250 |
    | 10 |       3 | R    | 2012-01-07 00:00:00 |   1250 |
    | 11 |       3 | R    | 2012-01-07 05:00:00 |   4000 |
    | 12 |       3 | A    | 2012-01-08 00:00:00 |  -1250 |
    | 14 |       2 | R    | 2012-01-09 00:00:00 |   2000 |
    | 13 |       3 | A    | 2012-01-10 00:00:00 |  -1500 |
    | 15 |       2 | A    | 2012-01-11 04:00:00 |  -2000 |
    | 16 |       2 | R    | 2012-01-12 00:00:00 |   5000 |
    +----+---------+------+---------------------+--------+
    16 rows in set (0.00 sec)
    

    First, create a query to grab, for each transaction, "the date of the most recent transaction before that one in the same account":

    SELECT t2.*,
           MAX(t1.date_time) AS prev_date
    FROM transactions t1
    JOIN transactions t2
    ON (t1.account = t2.account
       AND t2.date_time > t1.date_time)
    GROUP BY t2.account,t2.date_time
    ORDER BY t2.date_time;
    
    +----+---------+------+---------------------+--------+---------------------+
    | id | account | type | date_time           | amount | prev_date           |
    +----+---------+------+---------------------+--------+---------------------+
    |  3 |       3 | A    | 2012-01-03 13:10:01 |  -1500 | 2012-01-02 12:53:10 |
    |  5 |       1 | R    | 2012-01-04 12:30:01 |   1000 | 2012-01-01 10:01:00 |
    |  6 |       2 | A    | 2012-01-04 13:23:01 |  -2000 | 2012-01-03 17:56:00 |
    |  7 |       3 | R    | 2012-01-04 15:13:10 |   3000 | 2012-01-03 13:10:01 |
    |  8 |       3 | R    | 2012-01-05 12:12:00 |   1250 | 2012-01-04 15:13:10 |
    |  9 |       3 | A    | 2012-01-06 17:24:01 |  -1250 | 2012-01-05 12:12:00 |
    | 10 |       3 | R    | 2012-01-07 00:00:00 |   1250 | 2012-01-06 17:24:01 |
    | 11 |       3 | R    | 2012-01-07 05:00:00 |   4000 | 2012-01-07 00:00:00 |
    | 12 |       3 | A    | 2012-01-08 00:00:00 |  -1250 | 2012-01-07 05:00:00 |
    | 14 |       2 | R    | 2012-01-09 00:00:00 |   2000 | 2012-01-04 13:23:01 |
    | 13 |       3 | A    | 2012-01-10 00:00:00 |  -1500 | 2012-01-08 00:00:00 |
    | 15 |       2 | A    | 2012-01-11 04:00:00 |  -2000 | 2012-01-09 00:00:00 |
    | 16 |       2 | R    | 2012-01-12 00:00:00 |   5000 | 2012-01-11 04:00:00 |
    +----+---------+------+---------------------+--------+---------------------+
    13 rows in set (0.00 sec)
    

    Use that as a subquery to get each transaction and its predecessor on the same row, with some filtering to pull out the transactions we're interested in - namely, 'A' transactions whose predecessors are 'R' transactions that they exactly cancel out:

    SELECT
      t3.*,transactions.*
    FROM
      transactions
      JOIN
      (SELECT t2.*,
              MAX(t1.date_time) AS prev_date
       FROM transactions t1
       JOIN transactions t2
       ON (t1.account = t2.account
          AND t2.date_time > t1.date_time)
       GROUP BY t2.account,t2.date_time) t3
      ON t3.account = transactions.account
         AND t3.prev_date = transactions.date_time
         AND t3.type='A'
         AND transactions.type='R'
         AND t3.amount + transactions.amount = 0
      ORDER BY t3.date_time;
    
    
    +----+---------+------+---------------------+--------+---------------------+----+---------+------+---------------------+--------+
    | id | account | type | date_time           | amount | prev_date           | id | account | type | date_time           | amount |
    +----+---------+------+---------------------+--------+---------------------+----+---------+------+---------------------+--------+
    |  3 |       3 | A    | 2012-01-03 13:10:01 |  -1500 | 2012-01-02 12:53:10 |  2 |       3 | R    | 2012-01-02 12:53:10 |   1500 |
    |  6 |       2 | A    | 2012-01-04 13:23:01 |  -2000 | 2012-01-03 17:56:00 |  4 |       2 | R    | 2012-01-03 17:56:00 |   2000 |
    |  9 |       3 | A    | 2012-01-06 17:24:01 |  -1250 | 2012-01-05 12:12:00 |  8 |       3 | R    | 2012-01-05 12:12:00 |   1250 |
    | 15 |       2 | A    | 2012-01-11 04:00:00 |  -2000 | 2012-01-09 00:00:00 | 14 |       2 | R    | 2012-01-09 00:00:00 |   2000 |
    +----+---------+------+---------------------+--------+---------------------+----+---------+------+---------------------+--------+
    4 rows in set (0.00 sec)
    

    From the result above it's apparent we're almost there - we've identified the unwanted transactions. Using LEFT JOIN we can filter these out of the whole transaction set:

    SELECT
      transactions.*
    FROM
      transactions
    LEFT JOIN
      (SELECT
         transactions.id
       FROM
         transactions
         JOIN
         (SELECT t2.*,
                 MAX(t1.date_time) AS prev_date
          FROM transactions t1
          JOIN transactions t2
          ON (t1.account = t2.account
             AND t2.date_time > t1.date_time)
          GROUP BY t2.account,t2.date_time) t3
         ON t3.account = transactions.account
            AND t3.prev_date = transactions.date_time
            AND t3.type='A'
            AND transactions.type='R'
            AND t3.amount + transactions.amount = 0) t4
      USING(id)
      WHERE t4.id IS NULL
        AND transactions.type = 'R'
      ORDER BY transactions.date_time;
    
    +----+---------+------+---------------------+--------+
    | id | account | type | date_time           | amount |
    +----+---------+------+---------------------+--------+
    |  1 |       1 | R    | 2012-01-01 10:01:00 |   1000 |
    |  5 |       1 | R    | 2012-01-04 12:30:01 |   1000 |
    |  7 |       3 | R    | 2012-01-04 15:13:10 |   3000 |
    | 10 |       3 | R    | 2012-01-07 00:00:00 |   1250 |
    | 11 |       3 | R    | 2012-01-07 05:00:00 |   4000 |
    | 16 |       2 | R    | 2012-01-12 00:00:00 |   5000 |
    +----+---------+------+---------------------+--------+
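
    Since everything here is plain SQL, the whole pipeline can be sanity-checked end to end. Below is a self-contained check of the final query using Python's sqlite3 module (`ON transactions.id = t4.id` stands in for `USING(id)`, which is equivalent here; SQLite, like MySQL, tolerates the bare t2.* columns under GROUP BY because each group corresponds to a single t2 row):

    ```python
    import sqlite3

    # Rebuild the sample transactions table in an in-memory SQLite database
    # and run the final "surviving receipts" query from above.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE transactions (
        id INTEGER PRIMARY KEY, account INTEGER, type TEXT,
        date_time TEXT, amount INTEGER);
    INSERT INTO transactions VALUES
     (1,1,'R','2012-01-01 10:01:00',1000), (2,3,'R','2012-01-02 12:53:10',1500),
     (3,3,'A','2012-01-03 13:10:01',-1500),(4,2,'R','2012-01-03 17:56:00',2000),
     (5,1,'R','2012-01-04 12:30:01',1000), (6,2,'A','2012-01-04 13:23:01',-2000),
     (7,3,'R','2012-01-04 15:13:10',3000), (8,3,'R','2012-01-05 12:12:00',1250),
     (9,3,'A','2012-01-06 17:24:01',-1250),(10,3,'R','2012-01-07 00:00:00',1250),
     (11,3,'R','2012-01-07 05:00:00',4000),(12,3,'A','2012-01-08 00:00:00',-1250),
     (14,2,'R','2012-01-09 00:00:00',2000),(13,3,'A','2012-01-10 00:00:00',-1500),
     (15,2,'A','2012-01-11 04:00:00',-2000),(16,2,'R','2012-01-12 00:00:00',5000);
    """)
    rows = conn.execute("""
    SELECT transactions.*
    FROM transactions
    LEFT JOIN
      (SELECT transactions.id
       FROM transactions
       JOIN (SELECT t2.*, MAX(t1.date_time) AS prev_date
             FROM transactions t1
             JOIN transactions t2
               ON t1.account = t2.account AND t2.date_time > t1.date_time
             GROUP BY t2.account, t2.date_time) t3
         ON t3.account = transactions.account
            AND t3.prev_date = transactions.date_time
            AND t3.type = 'A'
            AND transactions.type = 'R'
            AND t3.amount + transactions.amount = 0) t4
      ON transactions.id = t4.id
    WHERE t4.id IS NULL AND transactions.type = 'R'
    ORDER BY transactions.date_time
    """).fetchall()
    print([r[0] for r in rows])  # ids of the surviving 'R' rows: [1, 5, 7, 10, 11, 16]
    ```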
    
    qid & accept id: (9518900, 9519129) query: how to find teams with sql command soup:

    soup wrap:

    I know there is nothing like ROW_NUMBER() OVER... in SQLite, but I cannot find anything about something similar to a CROSS APPLY.

    If there is something equivalent to a CROSS APPLY, then you can do the following. (EDIT: I noticed the requirement for schools to be able to have multiple teams. This solution would only work with one team per school; otherwise, as far as I can tell, you would need a recursive CTE and ROW_NUMBER, which are not available in SQLite to my knowledge.)

    SELECT  TeamTable.*
    FROM    Table
    CROSS APPLY
        (
            SELECT  TOP 4 *
            FROM Table AS InnerTable
            WHERE   InnerTable.school = Table.School
            ORDER BY InnerTable.Pos
        ) AS TeamTable
    

    If not, then you would probably have to use a while loop and temp tables to fill this. If that is the case, then there is no real gain from using the SQL and I would suggest going the code route.
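
    As a middle ground before resorting to loops: for the one-team-per-school case, a correlated COUNT subquery emulates the top-4-per-group in SQLite itself. A minimal sketch, checked via Python's sqlite3 (TeamTable, School, Name, and Pos are names assumed from the question, and Pos is assumed unique within a school):

    ```python
    import sqlite3

    # Top-4-per-school without CROSS APPLY: keep a row when at most 4 rows
    # in the same school have a Pos less than or equal to its own.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE TeamTable (School TEXT, Name TEXT, Pos INTEGER);
    INSERT INTO TeamTable VALUES
     ('North','Ann',1),('North','Bob',2),('North','Cal',3),
     ('North','Dee',4),('North','Eve',5),
     ('South','Fay',1),('South','Gil',2);
    """)
    rows = conn.execute("""
    SELECT School, Name, Pos
    FROM TeamTable AS t
    WHERE (SELECT COUNT(*) FROM TeamTable AS i
           WHERE i.School = t.School AND i.Pos <= t.Pos) <= 4
    ORDER BY School, Pos
    """).fetchall()
    print(rows)  # the top 4 North rows, then both South rows; 'Eve' is cut
    ```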

    EDIT: However, here is the temp table solution, as requested. You need the inner WHILE since you could have multiple teams within one school (something I had disregarded before, and which makes the CROSS APPLY solution unworkable without a recursive CTE and ROW_NUMBER, as acknowledged above).

    CREATE TABLE #SchoolList 
        (Id INT IDENTITY(1,1), School VARCHAR(50))
    
    INSERT INTO #SchoolList
    SELECT DISTINCT School
    FROM TeamTable
    
    CREATE TABLE #TeamList
        (TeamNumber INT IDENTITY(1,1), Pos INT, Name VARCHAR(50),
            School VARCHAR(50))
    
    DECLARE @CurrentSchool VARCHAR(50), @CurrentSchoolPos INT
    DECLARE @CurrentSchoolLookupId INT
    SET @CurrentSchoolLookupId = 1
    -- >= so the last school in #SchoolList is processed too
    WHILE EXISTS (SELECT 1 FROM #SchoolList WHERE Id >= @CurrentSchoolLookupId)
    BEGIN
        SELECT @CurrentSchool = School FROM #SchoolList
            WHERE Id = @CurrentSchoolLookupId
        -- scalar subqueries in SET need parentheses
        SET @CurrentSchoolPos = (SELECT TOP 1 Pos FROM TeamTable 
                                 WHERE School = @CurrentSchool 
                                 ORDER BY Pos)
        WHILE ISNULL(@CurrentSchoolPos, 0) > 0
        BEGIN
            INSERT INTO #TeamList
            SELECT Pos, Name, School 
            FROM TeamTable 
            WHERE School = @CurrentSchool AND Pos = @CurrentSchoolPos
    
            SET @CurrentSchoolPos = (SELECT TOP 1 Pos FROM TeamTable 
                                     WHERE School = @CurrentSchool 
                                         AND Pos > @CurrentSchoolPos ORDER BY Pos)
        END
        SET @CurrentSchoolLookupId = @CurrentSchoolLookupId + 1
    END
    
    SELECT * FROM #TeamList
    
    qid & accept id: (9535224, 9535281) query: Concatenate Two Values On Insert - SQL soup:

    soup wrap:

    You are currently using double quotes; you should instead use single quotes, since that is what makes a valid string in SQL.

     DOSQL "INSERT INTO Leads (DateTimeField) VALUES (cbdate1 + ' ' + cbtime1)"
    

    Edit:

    If you get further problems, it might be because your DateTimeField is a datetime datatype. In that case, after concatenating you could convert or cast the string to the correct type.

    Like:

     DOSQL "INSERT INTO Leads (DateTimeField) VALUES (Convert(datetime, cbdate1 + ' ' + cbtime1))"
    

    Edit #2:

    Without a 24-hour time part you would need a mon dd yyyy format, e.g. Oct 22 2012. Otherwise you might have to get the time part into a 24-hour format.
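
    To illustrate the format point: the mon dd yyyy date plus an AM/PM time parses unambiguously once concatenated. A quick plain-Python check of the same idea (cbdate1 and cbtime1 are names from the question; the sample values are invented):

    ```python
    from datetime import datetime

    # Concatenate the date and time parts, then convert - the same idea as
    # Convert(datetime, cbdate1 + ' ' + cbtime1), checked here with strptime.
    cbdate1, cbtime1 = "Oct 22 2012", "1:30PM"
    combined = cbdate1 + " " + cbtime1          # "Oct 22 2012 1:30PM"
    dt = datetime.strptime(combined, "%b %d %Y %I:%M%p")
    print(dt)  # 2012-10-22 13:30:00
    ```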

    qid & accept id: (9548686, 9548717) query: query inside of query soup:

    soup wrap:

    first join things up.

    select q.question_id, q.title
    from question q, post p
    where q.question_id = p.question_id
    

    then filter down to the posts you want

    select q.question_id, q.title
    from question q, post p
    where q.question_id = p.question_id
    and p.post like '%SEARCHTERM%'
    

    (or full text or whatever)

    then count up

    select q.question_id, q.title, count( post_id )
    from question q, post p
    where q.question_id = p.question_id
    and p.post like '%SEARCHTERM%'
    group by q.question_id, q.title
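
    The three-step build-up is easy to verify on a toy dataset. Here is a quick check of the final counting query via Python's sqlite3 (the question/post schema and sample rows are assumed for illustration):

    ```python
    import sqlite3

    # Toy question/post tables to check the final grouped count query.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE question (question_id INTEGER, title TEXT);
    CREATE TABLE post (post_id INTEGER, question_id INTEGER, post TEXT);
    INSERT INTO question VALUES (1, 'first q'), (2, 'second q');
    INSERT INTO post VALUES
     (1, 1, 'xx SEARCHTERM xx'), (2, 1, 'SEARCHTERM again'),
     (3, 2, 'no match here'),    (4, 2, 'one SEARCHTERM');
    """)
    rows = conn.execute("""
    select q.question_id, q.title, count( post_id )
    from question q, post p
    where q.question_id = p.question_id
    and p.post like '%SEARCHTERM%'
    group by q.question_id, q.title
    """).fetchall()
    print(sorted(rows))  # [(1, 'first q', 2), (2, 'second q', 1)]
    ```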
    
    qid & accept id: (9573470, 9573531) query: MySQL Selecting from One table into Another Based on ID soup:

    soup wrap:

    You can use either a subquery (SQLize):

    UPDATE Table1
    SET Val2 = ( SELECT Val1 FROM Table2 WHERE Table1.ID = Table2.ID )
    WHERE Val2 IS NULL
    

    or a multi-table update (SQLize):

    UPDATE Table1, Table2
    SET Table1.Val2 = Table2.Val1
    WHERE Table1.ID = Table2.ID AND Table1.Val2 IS NULL
    

    or the same with an explicit JOIN (SQLize):

    UPDATE Table1 JOIN Table2 ON Table1.ID = Table2.ID
    SET Table1.Val2 = Table2.Val1
    WHERE Table1.Val2 IS NULL
    

    (I assume you only want to update the rows in Table1 for which Val2 is NULL. If you'd rather overwrite the values for all rows with matching IDs in Table2, just remove the WHERE Table1.Val2 IS NULL condition.)
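
    The subquery form runs unchanged on SQLite as well, which makes the NULL-only behaviour easy to check (toy data; Table1/Table2/Val1/Val2 are the names from the question):

    ```python
    import sqlite3

    # Check that only rows with NULL Val2 are filled in; row 2 keeps its value.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE Table1 (ID INTEGER, Val2 TEXT);
    CREATE TABLE Table2 (ID INTEGER, Val1 TEXT);
    INSERT INTO Table1 VALUES (1, NULL), (2, 'keep me'), (3, NULL);
    INSERT INTO Table2 VALUES (1, 'a'), (2, 'b'), (3, 'c');
    """)
    conn.execute("""
    UPDATE Table1
    SET Val2 = ( SELECT Val1 FROM Table2 WHERE Table1.ID = Table2.ID )
    WHERE Val2 IS NULL
    """)
    rows = conn.execute("SELECT ID, Val2 FROM Table1 ORDER BY ID").fetchall()
    print(rows)  # [(1, 'a'), (2, 'keep me'), (3, 'c')]
    ```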

    qid & accept id: (9581458, 9583374) query: How can I prevent date overlaps in SQL? soup:

    soup wrap:

    Consider this query:

    SELECT *
    FROM Hire AS H1, Hire AS H2
    WHERE H1.carId = H2.carId
    AND H1.hireId < H2.hireId 
    AND 
       CASE 
       WHEN H1.onHireDate > H2.onHireDate THEN H1.onHireDate 
       ELSE H2.onHireDate END
       <
       CASE 
       WHEN H1.offHireDate > H2.offHireDate THEN H2.offHireDate 
       ELSE H1.offHireDate END
    

    If all rows meet your business rule then this query will be the empty set (assuming a closed-open representation of periods, i.e. where the end date is the earliest time granule that is not considered within the period).

    Because SQL Server does not support subqueries within CHECK constraints, put the same logic in a trigger (but not an INSTEAD OF trigger, unless you can provide logic to resolve overlaps).


    Alternative query using Fowler:

    SELECT *
      FROM Hire AS H1, Hire AS H2
     WHERE H1.carId = H2.carId
           AND H1.hireId < H2.hireId 
           AND H1.onHireDate < H2.offHireDate 
           AND H2.onHireDate < H1.offHireDate;
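
    This last query is the standard two-interval overlap test. A quick closed-open sanity check via Python's sqlite3 (the three Hire rows are invented for illustration; hire 2 starts exactly when hire 1 ends, so under closed-open semantics they do not overlap):

    ```python
    import sqlite3

    # hire 1 and 2 abut (no overlap, closed-open); hire 3 overlaps hire 2.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE Hire (hireId INTEGER, carId INTEGER,
                       onHireDate TEXT, offHireDate TEXT);
    INSERT INTO Hire VALUES
     (1, 1, '2012-01-01', '2012-01-10'),
     (2, 1, '2012-01-10', '2012-01-20'),
     (3, 1, '2012-01-15', '2012-01-25');
    """)
    rows = conn.execute("""
    SELECT H1.hireId, H2.hireId
      FROM Hire AS H1, Hire AS H2
     WHERE H1.carId = H2.carId
           AND H1.hireId < H2.hireId
           AND H1.onHireDate < H2.offHireDate
           AND H2.onHireDate < H1.offHireDate
    """).fetchall()
    print(rows)  # [(2, 3)] - the only overlapping pair
    ```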
    
    qid & accept id: (9623187, 9626026) query: Best way to replicate Oracles range windowing function in SQL Server soup:

    soup wrap:

    If I understand correctly, you want the following:

    For each case_id, channel_index combination:

    1. Find the lowest MAX value for all 3 minute windows (min sustained value)
    2. Find the highest MIN value for all 3 minute windows (max sustained value).
    3. Use data from the preceding 3 minutes. If 3 minutes have not elapsed since the first (MIN) start_time value, exclude that data.
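
    Stated as plain Python over (seconds, value) samples, the requirement looks like this (invented sample values, and a 10-second window standing in for the 3-minute one, as in the SQL below):

    ```python
    # Minimal sketch of "min sustained / max sustained" over a trailing
    # time window: take the max and min within each window ending at a
    # sample, skipping windows before the first full period has elapsed.
    WINDOW = 10
    samples = [(0, 5), (4, 9), (8, 7), (12, 3), (16, 8), (20, 6)]

    window_maxes, window_mins = [], []
    for t_end, _ in samples:
        if t_end < samples[0][0] + WINDOW:
            continue  # exclude windows before the first full period
        in_win = [v for t, v in samples if t_end - WINDOW <= t <= t_end]
        window_maxes.append(max(in_win))
        window_mins.append(min(in_win))

    min_sustained = min(window_maxes)  # lowest MAX over all windows
    max_sustained = max(window_mins)   # highest MIN over all windows
    print(min_sustained, max_sustained)  # 8 3
    ```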

    There are still several unexplained differences between the Oracle query and your solution (both the stored procedure and CLR stored procedure):

    1. The Oracle query doesn't ensure the time difference for each window is exactly 3 minutes; it only takes the min/max value for the preceding 3 minutes. The WHERE clause first_time + numtodsinterval(3, 'minute') <= start_time removes the time windows before the first 3 minutes have elapsed.
    2. The value_duration column is in the sample data, but not used in the solution.
    3. The sample data does not include 3 minutes of data, so I changed the time range to 10 seconds.
    4. You did not list the expected results for the sample data.

    SOLUTION -- This may not be the fastest solution, but it should work --

    Step 0: Window Time Range -- The sample data does not include 3 minutes of data, so I used a variable to hold the desired number of seconds for the window time range. For the actual data, you could use 180 seconds.

    DECLARE @seconds int
    SET @seconds = 10
    

    Step 1: First Time -- Although the first_time isn't important, it is still necessary to make sure we don't include incomplete time periods. It will be used later to exclude data before the first complete time period has elapsed.

    -- Query to return the first_time, last_time, and range_time
    -- range_time is first complete time period using the time range
    SELECT  case_id 
        ,   channel_index 
        ,   MIN(start_time) AS first_time
        ,   DATEADD(ss, @seconds, MIN(start_time)) AS range_time
        ,   MAX(start_time) AS last_time
    FROM    #continuous_data 
    GROUP BY case_id, channel_index
    ORDER BY case_id, channel_index
    
    -- Results from the sample data
    case_id     channel_index first_time              range_time              last_time
    ----------- ------------- ----------------------- ----------------------- -----------------------
    2081        50            2011-05-18 09:36:39.000 2011-05-18 09:36:49.000 2011-05-18 09:37:08.000
    2081        51            2011-05-18 09:36:34.000 2011-05-18 09:36:44.000 2011-05-18 09:37:04.000
    

    Step 2: Time Windows -- The Oracle query uses partition by case_id, channel_index order by start_time range numtodsinterval(3, 'minute') preceding to find the minimum and maximum dms_value as well as the first_time in the subquery. Since SQL Server does not have the range functionality, you need to use a subquery to define the 3 minute windows. The Oracle query uses range ... preceding, so the SQL Server range will use DATEADD with a negative value:

    -- Windowing for each time range. Window is the negative time
    -- range from each start_time row
    SELECT  case_id 
        ,   channel_index 
        ,   DATEADD(ss, -@seconds, start_time) AS window_start
        ,   start_time                         AS window_end
    FROM    #continuous_data 
    ORDER BY case_id, channel_index, start_time
    

    Step 3: MIN/MAX for Time Windows -- Next you need to find the minimum and maximum values for each window. This is where the majority of the calculation is performed and needs the most debugging to get the expected results.

    -- Find the maximum and minimum values for each window range
    -- I included the start_time min/max/diff for debugging
    SELECT  su.case_id 
        ,   su.channel_index 
        ,   win.window_end 
        ,   MAX(dms_value) AS dms_max
        ,   MIN(dms_value) AS dms_min
        ,   MIN(su.start_time) AS time_min
        ,   MAX(su.start_time) AS time_max
        ,   DATEDIFF(ss, MIN(su.start_time), MAX(su.start_time)) AS time_diff
    FROM    #continuous_data AS su
       JOIN (
            -- Windowing for each time range. Window is the negative time
            -- range from each start_time row
            SELECT  case_id 
                ,   channel_index 
                ,   DATEADD(ss, -@seconds, start_time) AS window_start
                ,   start_time                         AS window_end
            FROM    #continuous_data 
        ) AS win
            ON (    su.case_id       = win.case_id
                AND su.channel_index = win.channel_index)
       JOIN (
            -- Find the first_time and add the time range
            SELECT  case_id 
                ,   channel_index 
                ,   MIN(start_time)                        AS first_time
                ,   DATEADD(ss, @seconds, MIN(start_time)) AS range_time
            FROM    #continuous_data 
            GROUP BY case_id, channel_index
        ) AS fir
            ON (    su.case_id       = fir.case_id
                AND su.channel_index = fir.channel_index)
    WHERE   su.start_time BETWEEN win.window_start AND win.window_end
        AND win.window_end >= fir.range_time
    GROUP BY su.case_id, su.channel_index, win.window_end
    ORDER BY su.case_id, su.channel_index, win.window_end
    
    -- Results from sample data:
    case_id     channel_index window_end              dms_max                dms_min                time_min                time_max                time_diff
    ----------- ------------- ----------------------- ---------------------- ---------------------- ----------------------- ----------------------- -----------
    2081        50            2011-05-18 09:36:49.000 104.5625               94.8125                2011-05-18 09:36:39.000 2011-05-18 09:36:49.000 10
    2081        50            2011-05-18 09:36:50.000 105.8125               95.4375                2011-05-18 09:36:40.000 2011-05-18 09:36:50.000 10
    2081        50            2011-05-18 09:36:52.000 107.125                98.0625                2011-05-18 09:36:42.000 2011-05-18 09:36:52.000 10
    2081        50            2011-05-18 09:36:53.000 108.4375               99.3125                2011-05-18 09:36:44.000 2011-05-18 09:36:53.000 9
    2081        50            2011-05-18 09:36:54.000 109.75                 99.3125                2011-05-18 09:36:44.000 2011-05-18 09:36:54.000 10
    2081        50            2011-05-18 09:36:55.000 111.0625               100.625                2011-05-18 09:36:45.000 2011-05-18 09:36:55.000 10
    2081        50            2011-05-18 09:36:57.000 112.3125               103.25                 2011-05-18 09:36:48.000 2011-05-18 09:36:57.000 9
    2081        50            2011-05-18 09:36:58.000 113.625                103.25                 2011-05-18 09:36:48.000 2011-05-18 09:36:58.000 10
    2081        50            2011-05-18 09:36:59.000 114.9375               104.5625               2011-05-18 09:36:49.000 2011-05-18 09:36:59.000 10
    2081        50            2011-05-18 09:37:01.000 116.25                 107.125                2011-05-18 09:36:52.000 2011-05-18 09:37:01.000 9
    2081        50            2011-05-18 09:37:02.000 117.5                  107.125                2011-05-18 09:36:52.000 2011-05-18 09:37:02.000 10
    2081        50            2011-05-18 09:37:03.000 118.8125               108.4375               2011-05-18 09:36:53.000 2011-05-18 09:37:03.000 10
    2081        50            2011-05-18 09:37:05.000 120.125                111.0625               2011-05-18 09:36:55.000 2011-05-18 09:37:05.000 10
    2081        50            2011-05-18 09:37:06.000 121.4375               112.3125               2011-05-18 09:36:57.000 2011-05-18 09:37:06.000 9
    2081        50            2011-05-18 09:37:07.000 122.75                 112.3125               2011-05-18 09:36:57.000 2011-05-18 09:37:07.000 10
    2081        50            2011-05-18 09:37:08.000 124.0625               113.625                2011-05-18 09:36:58.000 2011-05-18 09:37:08.000 10
    2081        51            2011-05-18 09:36:46.000 98                     96                     2011-05-18 09:36:40.000 2011-05-18 09:36:46.000 6
    2081        51            2011-05-18 09:36:52.000 98                     92                     2011-05-18 09:36:46.000 2011-05-18 09:36:52.000 6
    2081        51            2011-05-18 09:36:58.000 92                     86                     2011-05-18 09:36:52.000 2011-05-18 09:36:58.000 6
    2081        51            2011-05-18 09:37:04.000 86                     80                     2011-05-18 09:36:58.000 2011-05-18 09:37:04.000 6
    

    Step 4: Finally, you can put it all together to return the lowest MAX value and highest MIN value for each time window:

    SELECT  su.case_id 
        ,   su.channel_index 
        ,   MIN(dms_max) AS su_min
        ,   MAX(dms_min) AS su_max
    FROM    (
            SELECT  su.case_id 
                ,   su.channel_index 
                ,   win.window_end 
                ,   MAX(dms_value) AS dms_max
                ,   MIN(dms_value) AS dms_min
            FROM    #continuous_data AS su
               JOIN (
                    -- Windowing for each time range. Window is the negative time
                    -- range from each start_time row
                    SELECT  case_id 
                        ,   channel_index 
                        ,   DATEADD(ss, -@seconds, start_time) AS window_start
                        ,   start_time                         AS window_end
                    FROM    #continuous_data 
                ) AS win
                    ON (    su.case_id       = win.case_id
                        AND su.channel_index = win.channel_index)
               JOIN (
                    -- Find the first_time and add the time range
                    SELECT  case_id 
                        ,   channel_index 
                        ,   MIN(start_time)                        AS first_time
                        ,   DATEADD(ss, @seconds, MIN(start_time)) AS range_time
                    FROM    #continuous_data 
                    GROUP BY case_id, channel_index
                ) AS fir
                    ON (    su.case_id       = fir.case_id
                        AND su.channel_index = fir.channel_index)
            WHERE   su.start_time BETWEEN win.window_start AND win.window_end
                AND win.window_end >= fir.range_time
            GROUP BY su.case_id, su.channel_index, win.window_end
    ) AS su
    GROUP BY su.case_id, su.channel_index
    ORDER BY su.case_id, su.channel_index
    
    -- Results from sample data:
    case_id     channel_index su_min                 su_max
    ----------- ------------- ---------------------- ----------------------
    2081        50            104.5625               113.625
    2081        51            86                     96
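    The window logic above can be sanity-checked with a small Python sketch. The data below is invented toy data (one reading per second); seconds plays the role of @seconds, and the two conditions mirror the win and fir joins:

```python
from datetime import datetime, timedelta

def window_min_max(rows, seconds):
    """For each row's start_time t, aggregate the values whose time
    falls in [t - seconds, t] (the win join), keeping only windows
    whose end is at least first_time + seconds (the fir join)."""
    first = min(t for t, _ in rows)
    out = []
    for t, _ in rows:
        if t < first + timedelta(seconds=seconds):
            continue  # window is not yet fully populated
        vals = [v for u, v in rows
                if t - timedelta(seconds=seconds) <= u <= t]
        out.append((t, min(vals), max(vals)))
    return out

# toy data: one reading per second, values 10..16
base = datetime(2011, 5, 18, 9, 36, 39)
rows = [(base + timedelta(seconds=i), 10 + i) for i in range(7)]
print(window_min_max(rows, 3))
```

With a 3-second range the first three rows are skipped (their windows start before the data does), matching how the SQL discards windows before first_time + @seconds.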
    
    qid & accept id: (9630004, 9630219) query: How to decrease the Auto increment _id in android SQLite? soup:

    soup wrap:

    EDIT: Maybe I should make it clear that just inserting the rows with the correct id instead of manipulating the sequence number is definitely a better idea than the method below. If there's no row with id=3 in the table, you can just insert with a fixed value in the id even in an AUTOINCREMENT table.


    That said, if you're really sure, you can set the auto increment value to any value using:

    UPDATE sqlite_sequence SET seq=<new_value> WHERE name='<table_name>';

    That is, if you want AUTOINCREMENT on the next insert on table 'TableA' to generate 5, you do:

    UPDATE sqlite_sequence set seq=4 where name='TableA';
    

    Note that resetting seq behaves a bit differently than you may expect: it just means that the lowest id generated will be the greater of seq + 1 and the max id still in the table + 1.

    That is, if you delete all values >=5, you can reset the sequence value to 4 and have 5 generated as the next sequence number, but if you still have the id 10 in the table, the next number generated will be 11 instead.

    Maybe I should point out the fact that I cannot find this exact behavior documented anywhere, so I'd not rely on the behavior for every future version of sqlite. It works now, it may not tomorrow.
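    The behavior described above can be demonstrated with Python's built-in sqlite3 module (table and column names here are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE TableA (id INTEGER PRIMARY KEY AUTOINCREMENT, val TEXT)")
con.executemany("INSERT INTO TableA (val) VALUES (?)",
                [("a",), ("b",), ("c",)])
# ids are now 1, 2, 3 and sqlite_sequence.seq is 3

# push the sequence forward
con.execute("UPDATE sqlite_sequence SET seq = 10 WHERE name = 'TableA'")
cur = con.execute("INSERT INTO TableA (val) VALUES ('d')")
print(cur.lastrowid)  # 11: the greater of seq and max(id), plus one

# pulling seq back below an existing id has no effect
con.execute("UPDATE sqlite_sequence SET seq = 4 WHERE name = 'TableA'")
cur = con.execute("INSERT INTO TableA (val) VALUES ('e')")
print(cur.lastrowid)  # 12, not 5, because id 11 is still in the table
```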

    qid & accept id: (9630859, 9631098) query: fetching data from database and set it on edittext soup:

    soup wrap:

    Check the database code in the following link: Android SQLite

    You have to store the values retrieved from the database in an ArrayList and set the value on the EditText as

    // myarraylist is the arraylist which contains 
    // the data retrieved from database
    editText.setText(myarraylist.get(0)); 
    

    After the data is retrieved, check whether the length of editText.getText().toString() is greater than zero; if it is, you should prevent the user from editing the text in the EditText using the following:

      editText.setFocusable(false);
    
    qid & accept id: (9655852, 9656733) query: sum of customer transactions soup:

    soup wrap:

    Actually, your example is not appropriate, or you're missing information about the problem itself. Answer this question: if you want one line including a total, what serial number do you want for that line? It is against common sense to have a total with detailed information (as long as you don't specify a criterion such as "and also I want the most recent purchase date for each email").

    Another way to see this is: what criteria did you apply to select this serial number 1087-7072 instead of 2447-7971 for zzz@msn.com? The same question applies for fields 1 and 3.

    So, what I understand would be useful for you (and minimal, of course) is this:

    36.00   T T     xxx@gmail.com
    6.00    R T     yyy@gmail.com
    46.00   P B     zzz@msn.com  
    10.00   y a     aaa@aol.com
    

    You can get this with the following query (based on your table schema, I assume name has those values P B):

    select sum(`Purchase Price`) as total_sum, name, email from purchases
    where `Purchase Date` between '2012-01-01' and '2012-01-31'
    group by email, name
    order by email
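    If you want to try the aggregation quickly, here is an in-memory SQLite sketch of the same query (quoting adapted from backticks to double quotes for SQLite; the sample values are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE purchases '
            '("Purchase Price" REAL, name TEXT, email TEXT, "Purchase Date" TEXT)')
con.executemany(
    "INSERT INTO purchases VALUES (?, ?, ?, ?)",
    [(16.0, "T T", "xxx@gmail.com", "2012-01-05"),
     (20.0, "T T", "xxx@gmail.com", "2012-01-20"),
     (6.0,  "R T", "yyy@gmail.com", "2012-01-10")])

# sum per (email, name), restricted to the January date range
rows = con.execute(
    'SELECT SUM("Purchase Price") AS total_sum, name, email FROM purchases '
    'WHERE "Purchase Date" BETWEEN ? AND ? '
    'GROUP BY email, name ORDER BY email',
    ("2012-01-01", "2012-01-31")).fetchall()
print(rows)  # [(36.0, 'T T', 'xxx@gmail.com'), (6.0, 'R T', 'yyy@gmail.com')]
```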
    

    Let me know if this is what you're (actually) looking for.

    qid & accept id: (9704624, 9739296) query: Oracle APEX - Saving Shuttle Item selections to a new table soup:

    soup wrap:

    APEX provides a utility to split the values out of a shuttle item like this:

    declare
        tab apex_application_global.vc_arr2;
    begin
        tab := apex_util.string_to_table (:p1_multiple_item);
        ...
    end;
    

    So for your requirement you could do:

    declare
        tab apex_application_global.vc_arr2;
    begin
        tab := apex_util.string_to_table (:p1_multiple_item);
        for i in 1..tab.count loop
            insert into order_parts_table (order_number, part_number, order_status)
            values (:p1_order_number, tab(i), 'ACTIVE');
        end loop;
    end;
    

    (NB I have not dealt with whether the row already exists, but you get the idea.)

    The processing for removing items will be along the same lines, though a bit more complicated.
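    Outside the database, the split-then-loop pattern looks like this. A Python sketch: an APEX shuttle item returns its selections as a ':'-separated string, which is what apex_util.string_to_table splits on by default; the part numbers and order id below are invented:

```python
def string_to_table(s, sep=":"):
    """Rough Python equivalent of apex_util.string_to_table."""
    return s.split(sep) if s else []

# hypothetical shuttle value: the selected part numbers
p1_multiple_item = "PN-100:PN-205:PN-317"

# one row to insert per selected element, as in the PL/SQL loop
rows = [("ORD-1", part, "ACTIVE")
        for part in string_to_table(p1_multiple_item)]
print(rows)
```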

    qid & accept id: (9755681, 9755750) query: Use regexp_instr to get the last number in a string soup:

    soup wrap:

    If you were using 11g, you could use regexp_count to determine the number of times that a pattern exists in the string and feed that into regexp_instr:

    regexp_instr( str,
                  '[[:digit:]]',
                  1,
                  regexp_count( str, '[[:digit:]]')
                )
    

    Since you're on 10g, however, the simplest option is probably to reverse the string and subtract the position that is found from the length of the string

    length(str) - regexp_instr(reverse(str),'[[:digit:]]') + 1
    

    Both approaches should work in 11g

    SQL> ed
    Wrote file afiedt.buf
    
      1  with x as (
      2    select '500 Oracle Parkway, Redwood Shores, CA' str
      3      from dual
      4  )
      5  select length(str) - regexp_instr(reverse(str),'[[:digit:]]') + 1,
      6         regexp_instr( str,
      7                       '[[:digit:]]',
      8                       1,
      9                       regexp_count( str, '[[:digit:]]')
     10                     )
     11*   from x
    SQL> /
    
    LENGTH(STR)-REGEXP_INSTR(REVERSE(STR),'[[:DIGIT:]]')+1
    ------------------------------------------------------
    REGEXP_INSTR(STR,'[[:DIGIT:]]',1,REGEXP_COUNT(STR,'[[:DIGIT:]]'))
    -----------------------------------------------------------------
                                                         3
                                                                    3
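    The reverse trick translates directly to other regex engines. A Python sketch of both approaches on the same sample string (Python match positions are 0-based, so the reverse formula loses the "+ 1"):

```python
import re

s = "500 Oracle Parkway, Redwood Shores, CA"

# 11g-style: position of the last digit, 1-based like Oracle's regexp_instr
positions = [m.start() + 1 for m in re.finditer(r"\d", s)]
last_via_count = positions[-1]

# 10g-style: reverse the string and subtract the first match position
last_via_reverse = len(s) - re.search(r"\d", s[::-1]).start()

print(last_via_count, last_via_reverse)  # 3 3
```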
    
    qid & accept id: (9760884, 9760912) query: Hebrew and other languages in sql soup:

    soup wrap:

    You need to store it as nvarchar and make sure to prefix the text with N.

    example

    declare @n nchar(1)
    set @n = N'文' 
    
    select @n
    GO
    
    declare @n nchar(1)
    set @n = '文' 
    
    select @n
    

    output

    ----
    文
    
    (1 row(s) affected)
    
    
    ----
    ?
    
    (1 row(s) affected)
    

    The N before the string value tells SQL Server to treat it as Unicode; notice that you get a question mark back when you don't use the N.

    In terms of searching, take a look at Performance Impacts of Unicode, Equals vs LIKE, and Partially Filled Fixed Width
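    The same data-loss mechanism can be seen outside SQL Server: forcing text into a single-byte, non-Unicode encoding turns characters it cannot represent into '?'. A Python sketch (the choice of cp1252 here is illustrative, standing in for a non-Unicode column):

```python
# '文' survives a Unicode encoding but not a single-byte code page
text = "文"
print(text.encode("utf-16-le"))                 # Unicode: round-trips fine
print(text.encode("cp1252", errors="replace"))  # b'?': the character is lost
```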

    qid & accept id: (9764030, 9767068) query: SQL Server 2008 Prior String Extract soup:

    soup wrap:

    The following may appear somewhat specific and too assuming, even though it might also look a bit too complicated for a specific and over-assuming solution. Still, I hope it will at least make a good starting point.

    These are the assumptions I had to make to avoid complicating the script even further:

    1. The values to be extracted never contain a decimal point (are integers).

    2. The values to be extracted are always either preceded by a space or at the beginning of the column value.

    3. Neither GB nor MB can possibly be part of anything else than a traffic size (a value to be extracted).

    4. Neither GB nor MB is ever preceded by a space.

    5. All the strings are either unique or accompanied by another column or columns that can be used as key values. (My solution, in particular, uses an additional column as a key.)

    So, here's my attempt (which did return the expected results for all the sample data provided in the original post):

    WITH data (id, str) AS (
                 SELECT 1, '$15 / 1GB 24m + Intern 120MB' ----------> 1.12 GB
      UNION ALL  SELECT 2, '$19.95 / 500MB + $49.95 / 9GB Blackberry' -----> 9.5GB
      UNION ALL  SELECT 3, '$174.95 Blackberry 24GB + $10 / 1GB Datapack' ----> 25GB
      UNION ALL  SELECT 4, '$79 / 6GB' --> 6GB
      UNION ALL  SELECT 5, Null --> Null
      UNION ALL  SELECT 6, '$20 Plan' --> 0GB
      UNION ALL  SELECT 7, '460MB' --> 0.46GB
    ),
    unified AS (
      SELECT
        id,
        oldstr = str,
        str = REPLACE(str, 'GB', '000MB')
      FROM data
    ),
    split AS (
      SELECT
        id,
        ofs    = 0,
        endpos = CHARINDEX('MB', str),
        length = ISNULL(CHARINDEX(' ', REVERSE(SUBSTRING(str, 1, NULLIF(CHARINDEX('MB', str), 0) - 1)) + ' ') - 1, 0),
        str    = SUBSTRING(str, NULLIF(CHARINDEX('MB', str), 0) + 2, 999999)
      FROM unified
      UNION ALL
      SELECT
        id,
        ofs    = NULLIF(endpos, 0) + 1,
        endpos = CHARINDEX('MB', str),
        length = ISNULL(CHARINDEX(' ', REVERSE(SUBSTRING(str, 1, NULLIF(CHARINDEX('MB', str), 0) - 1)) + ' ') - 1, 0),
        str    = SUBSTRING(str, NULLIF(CHARINDEX('MB', str), 0) + 2, 999999)
      FROM split
      WHERE length > 0
    ),
    extracted AS (
      SELECT
        d.id,
        str = d.oldstr,
        mb = CAST(SUBSTRING(d.str, s.ofs + s.endpos - s.length, s.length) AS int)
      FROM unified d
      INNER JOIN split s ON d.id = s.id
    )
    SELECT
      id,
      str,
      gb = RTRIM(CAST(SUM(mb) AS float) / 1000) + 'GB'
    FROM extracted
    GROUP BY id, str
    ORDER BY id
    

    Basically, the idea is first to convert all gigabytes to megabytes, to then be able to search for and extract only megabyte amounts. The search & extract method involves a recursive CTE and consists essentially of these steps:

    1) find the position of the first MB;

    2) find the length of the number immediately before the MB;

    3) cut off the beginning of the string right at the end of the first MB;

    4) repeat from Step 1 until no MB is found;

    5) join the found figures to the original string list to extract the amounts themselves.

    Afterwards, it only remains for us to group by key values and sum the obtained amounts. Here's the output:

    id  str                                           gb
    --  --------------------------------------------  ------
    1   $15 / 1GB 24m + Intern 120MB                  1.12GB
    2   $19.95 / 500MB + $49.95 / 9GB Blackberry      9.5GB
    3   $174.95 Blackberry 24GB + $10 / 1GB Datapack  25GB
    4   $79 / 6GB                                     6GB
    5   NULL                                          NULL
    6   $20 Plan                                      0GB
    7   460MB                                         0.46GB
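    The same convert-then-extract idea is compact in a language with regex support. A Python sketch of the logic (not of the T-SQL itself), relying on the same assumptions listed above:

```python
import re

def traffic_gb(s):
    """Convert GB to MB, extract every <digits>MB amount, sum, return GB."""
    if s is None:
        return None
    mb = sum(int(n) for n in re.findall(r"(\d+)MB", s.replace("GB", "000MB")))
    return mb / 1000.0

samples = ["$15 / 1GB 24m + Intern 120MB",
           "$19.95 / 500MB + $49.95 / 9GB Blackberry",
           "$174.95 Blackberry 24GB + $10 / 1GB Datapack",
           "$79 / 6GB", None, "$20 Plan", "460MB"]
print([traffic_gb(s) for s in samples])
# [1.12, 9.5, 25.0, 6.0, None, 0.0, 0.46]
```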
    
    qid & accept id: (9777457, 9777799) query: DB schema, many-many or bool values in table soup:

    soup wrap:

    Another suggestion is that you use a linker table. This would be more maintainable and easily documented. A linker table is used when you have a many-to-many relationship. (A restaurant can have many types of menu, and a particular type of menu can be utilized by many restaurants.)

    This lets you add additional menu types as a row in a "menu_types" table later, without changing the structure of any table.

    It does make your queries somewhat more complicated, though, as you have to perform some joins.

    First, you would have three tables something like this:

    restaurants
    ---------------
    id    name
    1     Moe's
    2     Steak & Shrimp House
    3     McDonald's
    
    restaurant_menus
    ----------------
    restaurant_id    menu_type
    1                1
    1                3
    2                4
    3                1
    3                3
    3                4
    
    menu_types
    ---------------
    id    type
    1     Breakfast
    2     Brunch
    3     Lunch
    4     Dinner
    

    So, to see what kind of menus each restaurant offers, your query goes like this:

    SELECT r.name, mt.type
    FROM restaurants r
        JOIN restaurant_menus rm
            ON (r.id = rm.restaurant_id)
        JOIN menu_types mt
            ON (rm.menu_type = mt.id)
    ORDER BY r.name ASC;
    

    This would produce:

    name                  type       
    --------------------  -----------
    McDonald's            Lunch      
    McDonald's            Breakfast  
    McDonald's            Dinner     
    Moe's                 Breakfast  
    Moe's                 Lunch      
    Steak & Shrimp House  Dinner     
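    Here is a runnable sketch of the three tables and the join, using Python's built-in sqlite3 with the same sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE restaurants (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE menu_types  (id INTEGER PRIMARY KEY, type TEXT);
CREATE TABLE restaurant_menus (restaurant_id INTEGER, menu_type INTEGER);
INSERT INTO restaurants VALUES
    (1, 'Moe''s'), (2, 'Steak & Shrimp House'), (3, 'McDonald''s');
INSERT INTO menu_types VALUES
    (1, 'Breakfast'), (2, 'Brunch'), (3, 'Lunch'), (4, 'Dinner');
INSERT INTO restaurant_menus VALUES (1,1),(1,3),(2,4),(3,1),(3,3),(3,4);
""")

rows = con.execute("""
    SELECT r.name, mt.type
    FROM restaurants r
    JOIN restaurant_menus rm ON r.id = rm.restaurant_id
    JOIN menu_types mt       ON rm.menu_type = mt.id
    ORDER BY r.name ASC""").fetchall()
for name, menu in rows:
    print(name, menu)
```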
    
    qid & accept id: (9789395, 9789631) query: Ignore emails that match a regexp in Postgres soup:

    soup wrap:

    Escape (double escape) the plus sign:

    E'^(info)\\+[A-Za-z0-9._%-]+@[A-Za-z0-9.-]+[.][A-Za-z]+'
      here __^^
    

    Moreover, there's no need to make a group with (info):

    E'^info\\+[A-Za-z0-9._%-]+@[A-Za-z0-9.-]+[.][A-Za-z]+'
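    In a regex engine used directly, a single backslash suffices; the double backslash above is only because the Postgres E'' literal consumes one level of escaping. A quick Python check of the same pattern (sample addresses invented):

```python
import re

pat = re.compile(r"^info\+[A-Za-z0-9._%-]+@[A-Za-z0-9.-]+[.][A-Za-z]+")
print(bool(pat.match("info+billing@example.com")))  # True
print(bool(pat.match("info@example.com")))          # False: no '+' after info
```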
    
    qid & accept id: (9807875, 9809089) query: SQL: Feeding SELECT output to LIKE soup:
    soup wrap:
    with prefix_list as (
      select regexp_substr( str1, '^[A-Z]*' ) prefix from t1 where str2 = 'NAME1'
    )
    select t1.str1 from t1 join prefix_list
            on t1.str1 = prefix_list.prefix
               or regexp_like( t1.str1, prefix_list.prefix||'_[0-9]' )
    

    To do it without the regexp functions (for older Oracle versions), it depends a bit on how much you want to validate the format of the strings.

    select t1.str1
      from (
      select case when instr( str1, '_' ) > 0
                    then substr( str1, 1, instr( str1, '_' ) - 1 )
                  else str1
             end prefix
        from t1 where str2 = 'NAME1'
    ) prefix_list,
      t1
    where t1.str1 = prefix
       or t1.str1 like prefix || '\__' escape '\'
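    The non-regexp variant can be tried on any engine with INSTR/SUBSTR and LIKE ... ESCAPE; here is a small sketch using SQLite via Python, with made-up sample rows (the table and column names follow the answer):

```python
import sqlite3

# Sketch of the INSTR/SUBSTR prefix extraction plus LIKE ... ESCAPE match,
# run against SQLite; sample data is invented for the demo.
conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE t1 (str1 TEXT, str2 TEXT);
    INSERT INTO t1 VALUES ('ABC', 'NAME1'), ('ABC_1', 'x'), ('XYZ_2', 'x');
""")
rows = conn.execute(r"""
    SELECT t1.str1
      FROM (SELECT CASE WHEN instr(str1, '_') > 0
                        THEN substr(str1, 1, instr(str1, '_') - 1)
                        ELSE str1
                   END prefix
              FROM t1 WHERE str2 = 'NAME1') prefix_list,
           t1
     WHERE t1.str1 = prefix
        OR t1.str1 LIKE prefix || '\__' ESCAPE '\'
""").fetchall()
print(sorted(r[0] for r in rows))  # the exact match plus the suffixed name
```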
    
    qid & accept id: (9861297, 9861596) query: Fixing duplicate customers in SQL soup:

    soup wrap:

    Update the Order table:

    UPDATE o
    SET o.person_id = cc.max_person_id
    FROM
        [Order] AS o
      JOIN
        Customer AS c
            ON c.person_id = o.person_id
      JOIN
        ( SELECT customer_id
               , MAX(person_id) AS max_person_id
          FROM Customer
          GROUP BY customer_id
        ) AS cc
            ON cc.customer_id = c.customer_id ;
    

    Then, update the Customer table:

    UPDATE c
    SET c.person_id = cc.max_person_id
    FROM
        Customer AS c
      JOIN
        ( SELECT customer_id
               , MAX(person_id) AS max_person_id
          FROM Customer
          GROUP BY customer_id
        ) AS cc
            ON cc.customer_id = c.customer_id ;
    

    After that, it would be good to have Customer(person_id) defined as PRIMARY KEY or with a UNIQUE constraint.

    And a FOREIGN KEY constraint from Order(person_id) to Customer(person_id)
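    For engines without UPDATE ... FROM, the same two-step fix can be written with correlated subqueries; here is a sketch on SQLite with invented sample rows (customer 1 has duplicate person_ids 10 and 11):

```python
import sqlite3

# Two-step dedupe via correlated subqueries: repoint orders at the surviving
# (max) person_id per customer, then collapse the Customer rows the same way.
# "Order" is quoted because it is a reserved word.
conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE Customer (customer_id INT, person_id INT);
    CREATE TABLE "Order" (order_id INT, person_id INT);
    INSERT INTO Customer VALUES (1, 10), (1, 11), (2, 20);
    INSERT INTO "Order" VALUES (100, 10), (101, 11), (102, 20);
""")
conn.execute("""
    UPDATE "Order"
       SET person_id = (SELECT MAX(c2.person_id)
                          FROM Customer c1
                          JOIN Customer c2 ON c2.customer_id = c1.customer_id
                         WHERE c1.person_id = "Order".person_id)
""")
conn.execute("""
    UPDATE Customer
       SET person_id = (SELECT MAX(c2.person_id)
                          FROM Customer c2
                         WHERE c2.customer_id = Customer.customer_id)
""")
print(conn.execute('SELECT person_id FROM "Order" ORDER BY order_id').fetchall())
# [(11,), (11,), (20,)] - both of customer 1's orders now share person_id 11
```

Note the Customer table then holds duplicate (customer_id, person_id) rows, which is why adding the PRIMARY KEY or UNIQUE constraint afterwards requires removing the leftover duplicates first.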

    qid & accept id: (9919278, 9921779) query: SQL multiple replace soup:

    soup wrap:

    Ed Northridge's answer will work, and I have upvoted it, but in case multiple replacements are required I am adding another option using his sample data. If, for example, one of the companies was called "The PC Company LTD", this would produce duplicate rows in the output, one being "The PC LTD" and the other "The PC Company". To resolve this there are two options, depending on your desired outcome. The first is to replace the "Bad Strings" only when they occur at the end of the name.

    SELECT  c.ID, RTRIM(x.Name) [Name]
    FROM    @companies c
            OUTER APPLY 
            (   SELECT  REPLACE(c.name, item, '') AS [Name]
                FROM    @badStrings
                        -- WHERE CLAUSE ADDED HERE
                WHERE   CHARINDEX(item, c.Name) = 1 + LEN(c.Name) - LEN(Item)
            ) x
    WHERE   c.name != '' 
    AND     x.[Name] != c.Name
    

    This would yield "The PC Company" with no duplicates.

    The other option is to replace all occurrences of the Bad Strings recursively:

    ;WITH CTE AS
    (   SELECT  c.ID, c.Name [OriginalName], RTRIM(x.Name) [Name], 1 [Level]
        FROM    @companies c
                OUTER APPLY 
                (   SELECT  REPLACE(c.name, item, '') AS [Name]
                    FROM    @badStrings
                    WHERE   CHARINDEX(item, c.Name) = 1 + LEN(c.Name) - LEN(Item)
                ) x
        WHERE   c.name != '' 
        AND     RTRIM(x.Name) != c.Name
        UNION ALL
        SELECT  c.ID, OriginalName, RTRIM(x.Name) [Name], Level + 1 [Level]
        FROM    CTE c
                OUTER APPLY 
                (   SELECT  REPLACE(c.name, item, '') AS [Name]
                    FROM    @badStrings
                    WHERE   CHARINDEX(item, c.Name) = 1 + LEN(c.Name) - LEN(Item)
                ) x
        WHERE   c.name != '' 
        AND     x.[Name] != c.Name  
    )
    
    SELECT  DISTINCT ID, Name, OriginalName
    FROM    (   SELECT  *, MAX(Level) OVER(PARTITION BY ID) [MaxLevel]
                FROM    CTE
            ) c
    WHERE   Level = maxLevel
    

    This would yield "The PC" from "The PC Company".
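    The recursive CTE's repeat-until-stable behaviour can be mirrored in plain Python, which may help when reasoning about it; the bad-string list below is a made-up sample in the spirit of the question:

```python
# Keep stripping any "bad string" found at the end of the name until none
# applies - the same fixed point the recursive CTE above converges to.
bad_strings = [' LTD', ' Company']  # sample values, not the full list

def strip_bad_suffixes(name: str) -> str:
    changed = True
    while changed:
        changed = False
        for bad in bad_strings:
            if name.endswith(bad):
                name = name[: -len(bad)].rstrip()
                changed = True
    return name

print(strip_bad_suffixes('The PC Company LTD'))  # → 'The PC'
```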

    qid & accept id: (10011337, 10013471) query: How to design a database table to enforce non-duplicate Unique Key records soup:

    soup wrap:

    Okay, this isn't the prettiest of code, but it does enforce the constraint, I think. The trick is to create an indexed view with two unique indexes defined on it:

    create table dbo.ABC (
        Col1 int not null,
        Col2 int not null
    )
    go
    create view dbo.ABC_Col1_Col2_dep
    with schemabinding
    as
        select Col1,Col2,COUNT_BIG(*) as Cnt
        from
            dbo.ABC
        group by
            Col1,Col2
    go
    create unique clustered index IX_Col1_UniqueCol2 on dbo.ABC_Col1_Col2_dep (Col1)
    go
    create unique nonclustered index IX_Col2_UniqueCol1 on dbo.ABC_Col1_Col2_dep (Col2)
    go
    

    Now we insert some initial data:

    insert into dbo.ABC (Col1,Col2)
    select 1,3 union all
    select 2,19 union all
    select 3,12
    

    We can add another row with exactly the same values for Col1 and Col2:

    insert into dbo.ABC (Col1,Col2)
    select 1,3
    

    But if we pick a value for Col2 that has been used for another Col1, or vice versa, we get errors:

    insert into dbo.ABC (Col1,Col2)
    select 2,3
    go
    insert into dbo.ABC (Col1,Col2)
    select 1,5
    

    The trick here was to observe that this query:

        select Col1,Col2,COUNT_BIG(*) as Cnt
        from
            dbo.ABC
        group by
            Col1,Col2
    

    will only have one row for a particular Col1 value, and only one row with a particular Col2 value, provided that the constraint you're seeking to enforce has not been broken - but as soon as a non-matching row is inserted into the base table, this query returns multiple rows.
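    That invariant is easy to check directly. SQLite (used below via Python) has no indexed views, but running the grouped query by hand shows it stays one-row-per-Col1 and one-row-per-Col2 only while the pairing holds:

```python
import sqlite3

# Check the invariant behind the indexed-view trick: the grouped query has
# one row per Col1 and per Col2 until a non-matching pair is inserted.
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE ABC (Col1 INT NOT NULL, Col2 INT NOT NULL)")
conn.executemany("INSERT INTO ABC VALUES (?, ?)", [(1, 3), (2, 19), (1, 3)])

def grouped_rows():
    return conn.execute(
        "SELECT Col1, Col2, COUNT(*) FROM ABC GROUP BY Col1, Col2 "
        "ORDER BY Col1, Col2"
    ).fetchall()

print(grouped_rows())   # exact duplicates collapse: [(1, 3, 2), (2, 19, 1)]
conn.execute("INSERT INTO ABC VALUES (2, 3)")  # reuses Col2=3 with a new Col1
print(grouped_rows())   # two groups now share Col2=3, so a unique index
                        # on Col2 over this result would raise an error
```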

    qid & accept id: (10019557, 10019607) query: Join SQL Server tables on a like statement soup:

    soup wrap:

    Cast StateID to a compatible type, e.g.

    WHERE URL LIKE '%' + CONVERT(varchar(50), StateID) + '%'
    

    or

    WHERE URL LIKE N'%' + CONVERT(nvarchar(50), StateID) + N'%'
    

    if URL is nvarchar(...)

    EDIT

    As pointed out in another answer, this could result in poor performance on large tables. The LIKE combined with a CONVERT will result in a table scan. This may not be a problem for small tables, but you should consider splitting the URL into two columns if performance becomes a problem. One column would contain 'page.aspx?id=' and the other the UNIQUEIDENTIFIER. Your query could then be optimized much more easily.
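    A simplified, runnable sketch of the join (SQLite via Python, with an INT id standing in for the UNIQUEIDENTIFIER and invented table names); SQLite's CAST ... AS TEXT plays the role of CONVERT here, and the leading wildcard is exactly what prevents index use:

```python
import sqlite3

# Join on LIKE with a cast value; the '%' on both sides forces a scan.
conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE Pages (URL TEXT);
    CREATE TABLE States (StateID INT);
    INSERT INTO Pages VALUES ('page.aspx?id=42'), ('page.aspx?id=7');
    INSERT INTO States VALUES (42);
""")
rows = conn.execute("""
    SELECT URL FROM Pages JOIN States
        ON URL LIKE '%' || CAST(StateID AS TEXT) || '%'
""").fetchall()
print([r[0] for r in rows])  # only the URL containing 42 joins
```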

    qid & accept id: (10025996, 10026264) query: Selecting different condition based on presence of association? soup:

    soup wrap:

    I'm assuming event contains the period association?

    In any case you want a left join between the discounts table and the periods table. This will give you the period data to do the begin = today where clause, and null if there is no period. Thus the SQL to select the data would be

    SELECT [columns]
    FROM discounts_table
    LEFT JOIN periods_table ON periods_table.discount_id = discounts_table.id
    WHERE (periods_table.begin = [today]) OR (periods_table.begin IS NULL AND discounts_table.created_at BETWEEN [yesterday] AND [today])
    

    In Rails you should be able to achieve this as follows:

    Discount
      .joins("LEFT JOIN periods_table ON periods_table.discount_id = discounts_table.id")
      .where("(periods_table.begin = ?) OR (periods_table.begin IS NULL AND discounts_table.created_at BETWEEN ? AND ?)", today, today, 1.day.ago.to_date)
    

    Unfortunately you need to use SQL string fragments rather than letting Rails build them for you, because:

    1. joins with a symbol only creates an INNER JOIN, not a LEFT JOIN
    2. where with symbols, hashes etc will combine conditions using AND, not OR
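    The LEFT JOIN / OR shape itself can be tried on any engine; here is a sketch on SQLite via Python with invented rows (discount 1 has a period beginning today, discount 2 has no period but was created within the window, discount 3 matches neither):

```python
import sqlite3

# LEFT JOIN keeps discounts without periods (NULL "begin"), and the OR picks
# either a period starting today or a recent period-less discount.
conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE discounts_table (id INT, created_at TEXT);
    CREATE TABLE periods_table (discount_id INT, "begin" TEXT);
    INSERT INTO discounts_table VALUES
        (1, '2012-04-01'), (2, '2012-04-06'), (3, '2012-03-01');
    INSERT INTO periods_table VALUES (1, '2012-04-06');
""")
today, yesterday = '2012-04-06', '2012-04-05'
rows = conn.execute("""
    SELECT discounts_table.id
      FROM discounts_table
      LEFT JOIN periods_table
        ON periods_table.discount_id = discounts_table.id
     WHERE periods_table."begin" = ?
        OR (periods_table."begin" IS NULL
            AND discounts_table.created_at BETWEEN ? AND ?)
""", (today, yesterday, today)).fetchall()
print(sorted(r[0] for r in rows))  # discounts 1 and 2 qualify
```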
    qid & accept id: (10035769, 10035842) query: Query to Select Between Two Times of Day soup:

    soup wrap:

    Since you're on SQL Server 2008, you can use the new TIME datatype:

    SELECT * FROM MyTable
    WHERE CAST(SyncDate AS TIME) BETWEEN '14:00' and '14:30'
    

    If your backend isn't 2008 yet :-) then you'd need something like:

    SELECT * FROM MyTable
    WHERE DATEPART(HOUR, SyncDate) = 14 AND DATEPART(MINUTE, SyncDate) BETWEEN 0 AND 30
    

    to check for 14:00-14:30 hours.
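    The same extract-the-time-of-day idea carries over to other engines; here is a sketch on SQLite via Python, where strftime plays the role of CAST(... AS TIME) (sample timestamps are invented):

```python
import sqlite3

# Filter rows by time-of-day regardless of date: pull out HH:MM and
# range-test it as a string, which sorts correctly for fixed-width times.
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE MyTable (SyncDate TEXT)")
conn.executemany("INSERT INTO MyTable VALUES (?)", [
    ('2012-04-06 14:15:00',),
    ('2012-04-06 14:45:00',),
    ('2012-04-07 14:30:00',),
])
rows = conn.execute("""
    SELECT SyncDate FROM MyTable
    WHERE strftime('%H:%M', SyncDate) BETWEEN '14:00' AND '14:30'
""").fetchall()
print(sorted(r[0] for r in rows))  # the 14:15 and 14:30 rows, any date
```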

    qid & accept id: (10109770, 10110046) query: get all child from an parent id soup:

    soup wrap:

    This should do it for you:

    create table #temp 
    (
        id int, 
        parentid int,
        data varchar(1)
    )
    insert #temp (id, parentid, data) values (1, -1, 'a')
    insert #temp (id, parentid, data) values (2,1, 'b')
    insert #temp (id, parentid, data) values  (3,2, 'c')
    insert #temp (id, parentid, data) values  (4,3, 'd')
    insert #temp (id, parentid, data) values  (5,3, 'f')
    
    ; with cte as (
        select  id, parentid, data, id as topparent
        from    #temp
        union all
        select  child.id, child.parentid, child.data, parent.topparent
        from    #temp child
        join    cte parent
        on      parent.id = child.parentid
    
    )
    select  id, parentid, data
    from    cte
    where topparent = 2
    
    drop table #temp
    

    EDIT or you can put the WHERE clause inside the first select

    create table #temp 
    (
        id int, 
        parentid int,
        data varchar(1)
    )
    insert #temp (id, parentid, data) values (1, -1, 'a')
    insert #temp (id, parentid, data) values (2,1, 'b')
    insert #temp (id, parentid, data) values  (3,2, 'c')
    insert #temp (id, parentid, data) values  (4,3, 'd')
    insert #temp (id, parentid, data) values  (5,3, 'f')
    
    ; with cte as (
        select  id, parentid, data, id as topparent
        from    #temp
        WHERE id = 2
        union all
        select  child.id, child.parentid, child.data, parent.topparent
        from    #temp child
        join    cte parent
        on      parent.id = child.parentid
    
    )
    select  id, parentid, data
    from    cte
    
    drop table #temp
    

    Results:

    id  parentid      data
    2   1              b
    3   2              c
    4   3              d
    5   3              f
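    The same descendant query runs on any engine with recursive CTEs; the sketch below uses SQLite via Python, which just spells it WITH RECURSIVE (a permanent table stands in for the #temp table):

```python
import sqlite3

# Walk down from id = 2 and collect every descendant, using the answer's
# sample rows; SQLite requires the RECURSIVE keyword.
conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE tree (id INT, parentid INT, data TEXT);
    INSERT INTO tree VALUES (1, -1, 'a'), (2, 1, 'b'),
                            (3, 2, 'c'), (4, 3, 'd'), (5, 3, 'f');
""")
rows = conn.execute("""
    WITH RECURSIVE cte AS (
        SELECT id, parentid, data FROM tree WHERE id = 2
        UNION ALL
        SELECT child.id, child.parentid, child.data
          FROM tree child
          JOIN cte parent ON parent.id = child.parentid
    )
    SELECT id, parentid, data FROM cte
""").fetchall()
print(sorted(rows))  # node 2 and its descendants 3, 4, 5
```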
    
    qid & accept id: (10121680, 10121774) query: SQL query over multiple rows soup:

    soup wrap:

    Another approach would be -

    SELECT housing_id
    FROM mytable
    WHERE facility_id IN (4,7)
    GROUP BY housing_id
    HAVING COUNT(DISTINCT facility_id) = 2
    

    UPDATE - inspired by the comment by Josvic I decided to do some more testing and thought I would include my findings.

    One of the benefits of using this query is that it is easy to modify to include more facility_ids. If you want to find all housing_ids that have facility_ids 1, 3, 4 & 7 you just do -

    SELECT housing_id
    FROM mytable
    WHERE facility_id IN (1,3,4,7)
    GROUP BY housing_id
    HAVING COUNT(DISTINCT facility_id) = 4
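    A quick way to convince yourself of the pattern is to run it on a toy dataset; the sketch below uses SQLite via Python with invented rows (only housing 1 has both facilities 4 and 7):

```python
import sqlite3

# GROUP BY ... HAVING COUNT(DISTINCT ...): a housing_id qualifies only if
# it carries every facility_id in the IN list.
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE mytable (housing_id INT, facility_id INT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?)",
                 [(1, 4), (1, 7), (2, 4), (3, 7), (3, 7)])
rows = conn.execute("""
    SELECT housing_id FROM mytable
    WHERE facility_id IN (4, 7)
    GROUP BY housing_id
    HAVING COUNT(DISTINCT facility_id) = 2
""").fetchall()
print([r[0] for r in rows])  # → [1]
```

Note the DISTINCT matters: housing 3 has facility 7 twice, and a bare COUNT(*) would wrongly accept it.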
    

    The performance of all three of these queries varies hugely based on the indexing strategy employed. I was unable to get reasonable performance, on my test dataset, from the dependent subquery version regardless of the indexing used.

    The self join solution provided by Tim performs very well given separate single column indices on the two columns but does not perform quite so well as the number of criteria increases.

    Here are some basic stats on my test table - 500k rows - 147963 housing_ids with potential values for facility_id between 1 and 9.

    Here are the indices used for running all these tests -

    SHOW INDEXES FROM mytable;
    +---------+------------+---------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+
    | Table   | Non_unique | Key_name            | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type |
    +---------+------------+---------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+
    | mytable |          0 | UQ_housing_facility |            1 | housing_id  | A         |      500537 |     NULL | NULL   |      | BTREE      |
    | mytable |          0 | UQ_housing_facility |            2 | facility_id | A         |      500537 |     NULL | NULL   |      | BTREE      |
    | mytable |          0 | UQ_facility_housing |            1 | facility_id | A         |          12 |     NULL | NULL   |      | BTREE      |
    | mytable |          0 | UQ_facility_housing |            2 | housing_id  | A         |      500537 |     NULL | NULL   |      | BTREE      |
    | mytable |          1 | IX_housing          |            1 | housing_id  | A         |      500537 |     NULL | NULL   |      | BTREE      |
    | mytable |          1 | IX_facility         |            1 | facility_id | A         |          12 |     NULL | NULL   |      | BTREE      |
    +---------+------------+---------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+
    

    The first query tested is the dependent subquery -

    SELECT SQL_NO_CACHE DISTINCT housing_id
    FROM mytable
    WHERE housing_id IN (SELECT housing_id FROM mytable WHERE facility_id=4)
    AND housing_id IN (SELECT housing_id FROM mytable WHERE facility_id=7);
    
    17321 rows in set (9.15 sec)
    
    +----+--------------------+---------+-----------------+----------------------------------------------------------------+---------------------+---------+------------+--------+---------------------------------------+
    | id | select_type        | table   | type            | possible_keys                                                  | key                 | key_len | ref        | rows   | Extra                                 |
    +----+--------------------+---------+-----------------+----------------------------------------------------------------+---------------------+---------+------------+--------+---------------------------------------+
    |  1 | PRIMARY            | mytable | range           | NULL                                                           | IX_housing          | 4       | NULL       | 500538 | Using where; Using index for group-by |
    |  3 | DEPENDENT SUBQUERY | mytable | unique_subquery | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8       | func,const |      1 | Using index; Using where              |
    |  2 | DEPENDENT SUBQUERY | mytable | unique_subquery | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8       | func,const |      1 | Using index; Using where              |
    +----+--------------------+---------+-----------------+----------------------------------------------------------------+---------------------+---------+------------+--------+---------------------------------------+
    
    SELECT SQL_NO_CACHE DISTINCT housing_id
    FROM mytable
    WHERE housing_id IN (SELECT housing_id FROM mytable WHERE facility_id=1)
    AND housing_id IN (SELECT housing_id FROM mytable WHERE facility_id=3)
    AND housing_id IN (SELECT housing_id FROM mytable WHERE facility_id=4)
    AND housing_id IN (SELECT housing_id FROM mytable WHERE facility_id=7);
    
    567 rows in set (9.30 sec)
    
    +----+--------------------+---------+-----------------+----------------------------------------------------------------+---------------------+---------+------------+--------+---------------------------------------+
    | id | select_type        | table   | type            | possible_keys                                                  | key                 | key_len | ref        | rows   | Extra                                 |
    +----+--------------------+---------+-----------------+----------------------------------------------------------------+---------------------+---------+------------+--------+---------------------------------------+
    |  1 | PRIMARY            | mytable | range           | NULL                                                           | IX_housing          | 4       | NULL       | 500538 | Using where; Using index for group-by |
    |  5 | DEPENDENT SUBQUERY | mytable | unique_subquery | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8       | func,const |      1 | Using index; Using where              |
    |  4 | DEPENDENT SUBQUERY | mytable | unique_subquery | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8       | func,const |      1 | Using index; Using where              |
    |  3 | DEPENDENT SUBQUERY | mytable | unique_subquery | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8       | func,const |      1 | Using index; Using where              |
    |  2 | DEPENDENT SUBQUERY | mytable | unique_subquery | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8       | func,const |      1 | Using index; Using where              |
    +----+--------------------+---------+-----------------+----------------------------------------------------------------+---------------------+---------+------------+--------+---------------------------------------+
    

    Next is my version using the GROUP BY ... HAVING COUNT ...

    SELECT SQL_NO_CACHE housing_id
    FROM mytable
    WHERE facility_id IN (4,7)
    GROUP BY housing_id
    HAVING COUNT(DISTINCT facility_id) = 2;
    
    17321 rows in set (0.79 sec)
    
    +----+-------------+---------+-------+---------------------------------+-------------+---------+------+--------+------------------------------------------+
    | id | select_type | table   | type  | possible_keys                   | key         | key_len | ref  | rows   | Extra                                    |
    +----+-------------+---------+-------+---------------------------------+-------------+---------+------+--------+------------------------------------------+
    |  1 | SIMPLE      | mytable | range | UQ_facility_housing,IX_facility | IX_facility | 4       | NULL | 198646 | Using where; Using index; Using filesort |
    +----+-------------+---------+-------+---------------------------------+-------------+---------+------+--------+------------------------------------------+
    
    SELECT SQL_NO_CACHE housing_id
    FROM mytable
    WHERE facility_id IN (1,3,4,7)
    GROUP BY housing_id
    HAVING COUNT(DISTINCT facility_id) = 4;
    
    567 rows in set (1.25 sec)
    
    +----+-------------+---------+-------+---------------------------------+-------------+---------+------+--------+------------------------------------------+
    | id | select_type | table   | type  | possible_keys                   | key         | key_len | ref  | rows   | Extra                                    |
    +----+-------------+---------+-------+---------------------------------+-------------+---------+------+--------+------------------------------------------+
    |  1 | SIMPLE      | mytable | range | UQ_facility_housing,IX_facility | IX_facility | 4       | NULL | 407160 | Using where; Using index; Using filesort |
    +----+-------------+---------+-------+---------------------------------+-------------+---------+------+--------+------------------------------------------+
    

    And last but not least the self join -

    SELECT SQL_NO_CACHE a.housing_id
    FROM mytable a
    INNER JOIN mytable b
        ON a.housing_id = b.housing_id
    WHERE a.facility_id = 4 AND b.facility_id = 7;
    
    17321 rows in set (1.37 sec)
    
    +----+-------------+-------+--------+----------------------------------------------------------------+---------------------+---------+-------------------------+-------+-------------+
    | id | select_type | table | type   | possible_keys                                                  | key                 | key_len | ref                     | rows  | Extra       |
    +----+-------------+-------+--------+----------------------------------------------------------------+---------------------+---------+-------------------------+-------+-------------+
    |  1 | SIMPLE      | b     | ref    | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | IX_facility         | 4       | const                   | 94598 | Using index |
    |  1 | SIMPLE      | a     | eq_ref | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8       | test.b.housing_id,const |     1 | Using index |
    +----+-------------+-------+--------+----------------------------------------------------------------+---------------------+---------+-------------------------+-------+-------------+
    
    SELECT SQL_NO_CACHE a.housing_id
    FROM mytable a
    INNER JOIN mytable b
        ON a.housing_id = b.housing_id
    INNER JOIN mytable c
        ON a.housing_id = c.housing_id
    INNER JOIN mytable d
        ON a.housing_id = d.housing_id
    WHERE a.facility_id = 1
    AND b.facility_id = 3
    AND c.facility_id = 4
    AND d.facility_id = 7;
    
    567 rows in set (1.64 sec)
    
    +----+-------------+-------+--------+----------------------------------------------------------------+---------------------+---------+-------------------------+-------+--------------------------+
    | id | select_type | table | type   | possible_keys                                                  | key                 | key_len | ref                     | rows  | Extra                    |
    +----+-------------+-------+--------+----------------------------------------------------------------+---------------------+---------+-------------------------+-------+--------------------------+
    |  1 | SIMPLE      | b     | ref    | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | IX_facility         | 4       | const                   | 93782 | Using index              |
    |  1 | SIMPLE      | d     | eq_ref | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8       | test.b.housing_id,const |     1 | Using index              |
    |  1 | SIMPLE      | c     | eq_ref | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8       | test.b.housing_id,const |     1 | Using index              |
    |  1 | SIMPLE      | a     | eq_ref | UQ_housing_facility,UQ_facility_housing,IX_housing,IX_facility | UQ_housing_facility | 8       | test.d.housing_id,const |     1 | Using where; Using index |
    +----+-------------+-------+--------+----------------------------------------------------------------+---------------------+---------+-------------------------+-------+--------------------------+
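    As a cross-check that the self-join form agrees with the other approaches, here is a minimal sqlite3 sketch (invented sample data; one unique (housing_id, facility_id) pair per row, as the UQ indices above enforce):

```python
import sqlite3

# Self-join check: a housing qualifies when its rows can supply one match
# with facility 4 (alias a) and one with facility 7 (alias b).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (housing_id INT, facility_id INT)")
con.executemany("INSERT INTO mytable VALUES (?, ?)",
                [(1, 4), (1, 7), (2, 4), (3, 4), (3, 7)])

rows = con.execute("""
    SELECT a.housing_id
    FROM mytable a
    INNER JOIN mytable b ON a.housing_id = b.housing_id
    WHERE a.facility_id = 4 AND b.facility_id = 7
    ORDER BY a.housing_id
""").fetchall()
print(rows)  # [(1,), (3,)] -- housings 1 and 3 have both facilities
```

    Because each (housing_id, facility_id) pair is unique, the join produces exactly one row per qualifying housing and no DISTINCT is needed.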
    
    qid & accept id: (10139181, 10139312) query: Extract maximum value with object name in Access using SQL - handle identical values soup:
    SELECT a.custID, MAX(a.product), MAX(a.price)\nFROM orders AS a \nWHERE a.price = (select MAX(b.price) from orders b where a.custID=b.custID)\nGROUP by a.custID\n
    \n

    Just a side note:
    \nIf you have a more advanced SQL server that supports windowing functions, like SQL Server 2008 you can instead write

    \n
    SELECT custID, product, price FROM (\n    SELECT custID, product, price,  ROW_NUMBER()\n        OVER (partition by custid order by price desc) AS rowNo\n    FROM orders \n) AS a\nWHERE a.rowNo = 1\n
    \n soup wrap:
    SELECT a.custID, MAX(a.product), MAX(a.price)
    FROM orders AS a 
    WHERE a.price = (select MAX(b.price) from orders b where a.custID=b.custID)
    GROUP by a.custID
    

    Just a side note:
    If you have a more advanced SQL server that supports window functions, like SQL Server 2008, you can instead write

    SELECT custID, product, price FROM (
        SELECT custID, product, price,  ROW_NUMBER()
            OVER (partition by custid order by price desc) AS rowNo
        FROM orders 
    ) AS a
    WHERE a.rowNo = 1
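    A minimal sqlite3 sketch of the ROW_NUMBER version (requires SQLite 3.25+ for window functions; the sample orders are invented). The tie for customer 2 shows why ROW_NUMBER handles identical values: exactly one row per customer comes back even when two prices are equal:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (custID INT, product TEXT, price REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    (1, "pen", 2.0), (1, "book", 9.0),
    (2, "mug", 5.0), (2, "hat", 5.0),  # tied prices for customer 2
])

rows = con.execute("""
    SELECT custID, product, price FROM (
        SELECT custID, product, price,
               ROW_NUMBER() OVER (PARTITION BY custID ORDER BY price DESC) AS rowNo
        FROM orders
    ) AS a
    WHERE a.rowNo = 1
    ORDER BY custID
""").fetchall()
print(rows)  # one row per customer; which tied row wins is arbitrary
```

    Which of the two tied rows is numbered 1 is unspecified unless you add a tie-breaker to the ORDER BY inside the OVER clause.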
    
    qid & accept id: (10164354, 10164387) query: Designing a table with a column need to stored in four different languages soup:

    I recommend against using either of these methods you described. Instead, create a single highlight table with 3 columns:

    \n
    CREATE TABLE highlight \n(\n  article_id INT NOT NULL,\n  language VARCHAR(),\n  highlight_text VARCHAR() CHARACTER SET utf8,\n  PRIMARY KEY (article_id, language),\n  FOREIGN KEY (article_id) REFERENCES articles (article_id)\n)\n
    \n

    Each row links to an article by article_id, and contains a language version and the relevant text. This allows you to add as many languages as you ever need to, and it doesn't matter if one is missing for an article - it simply doesn't appear in the table. It also allows you to use entirely different language sets per article if it ever becomes necessary.

    \n

    Values then look like:

    \n
    2  en  The English text for article 2\n2  dr  The French text for article 2\n2  de  The German text for article 2\n3  en  The English text for article 3\n3  dr  The French text for article 3\n3  de  The German text for article 3\n3  sw  Oh wait, article 3 also needed Swahili text!\n
    \n soup wrap:

    I recommend against using either of these methods you described. Instead, create a single highlight table with 3 columns:

    CREATE TABLE highlight 
    (
      article_id INT NOT NULL,
      language VARCHAR(),
      highlight_text VARCHAR() CHARACTER SET utf8,
      PRIMARY KEY (article_id, language),
      FOREIGN KEY (article_id) REFERENCES articles (article_id)
    )
    

    Each row links to an article by article_id, and contains a language version and the relevant text. This allows you to add as many languages as you ever need to, and it doesn't matter if one is missing for an article - it simply doesn't appear in the table. It also allows you to use entirely different language sets per article if it ever becomes necessary.

    Values then look like:

    2  en  The English text for article 2
    2  fr  The French text for article 2
    2  de  The German text for article 2
    3  en  The English text for article 3
    3  fr  The French text for article 3
    3  de  The German text for article 3
    3  sw  Oh wait, article 3 also needed Swahili text!
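    A minimal sqlite3 sketch of the proposed table (column types simplified to SQLite's; sample rows invented). Note that a missing language simply produces no row:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE highlight (
        article_id INT NOT NULL,
        language   TEXT NOT NULL,
        highlight_text TEXT,
        PRIMARY KEY (article_id, language)
    )
""")
con.executemany("INSERT INTO highlight VALUES (?, ?, ?)", [
    (2, "en", "The English text for article 2"),
    (2, "fr", "The French text for article 2"),
    (3, "en", "The English text for article 3"),
    (3, "sw", "The Swahili text for article 3"),
])

# Fetch one article in one language; an absent translation is just no row.
row = con.execute(
    "SELECT highlight_text FROM highlight WHERE article_id = ? AND language = ?",
    (3, "sw")).fetchone()
print(row[0])

missing = con.execute(
    "SELECT highlight_text FROM highlight WHERE article_id = ? AND language = ?",
    (2, "sw")).fetchone()
print(missing)  # None -- article 2 has no Swahili highlight
```

    The composite primary key also prevents storing two highlights for the same article in the same language.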
    
    qid & accept id: (10182533, 10182629) query: Efficient query to split a delimited column into a separate table soup:

    Create a split function:

    \n
    CREATE FUNCTION dbo.SplitStrings(@List NVARCHAR(MAX))\nRETURNS TABLE\nAS\n   RETURN ( SELECT Item FROM\n       ( SELECT Item = x.i.value('(./text())[1]', 'nvarchar(max)')\n         FROM ( SELECT [XML] = CONVERT(XML, ''\n         + REPLACE(@List, '.', '') + '').query('.')\n           ) AS a CROSS APPLY [XML].nodes('i') AS x(i) ) AS y\n       WHERE Item IS NOT NULL\n   );\nGO\n
    \n

    Then get rid of all the cursor and looping nonsense and do this:

    \n
    INSERT dbo.mrhierlookup\n(\n  heiraui,\n  aui\n)\nSELECT s.Item, m.aui\n  FROM dbo.mrhier3 AS m\n  CROSS APPLY dbo.SplitStrings(m.ptr) AS s\nGROUP BY s.Item, m.aui;\n
    \n soup wrap:

    Create a split function:

    CREATE FUNCTION dbo.SplitStrings(@List NVARCHAR(MAX))
    RETURNS TABLE
    AS
       RETURN ( SELECT Item FROM
           ( SELECT Item = x.i.value('(./text())[1]', 'nvarchar(max)')
             FROM ( SELECT [XML] = CONVERT(XML, ''
             + REPLACE(@List, '.', '') + '').query('.')
               ) AS a CROSS APPLY [XML].nodes('i') AS x(i) ) AS y
           WHERE Item IS NOT NULL
       );
    GO
    

    Then get rid of all the cursor and looping nonsense and do this:

    INSERT dbo.mrhierlookup
    (
      heiraui,
      aui
    )
    SELECT s.Item, m.aui
      FROM dbo.mrhier3 AS m
      CROSS APPLY dbo.SplitStrings(m.ptr) AS s
    GROUP BY s.Item, m.aui;
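    The XML splitter above is SQL Server-specific. As an illustration of what the CROSS APPLY produces, here is a minimal Python sketch that performs the same split-and-deduplicate against sqlite3 (table names from the answer; the sample data is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mrhier3 (aui TEXT, ptr TEXT)")
con.execute("CREATE TABLE mrhierlookup (heiraui TEXT, aui TEXT)")
con.executemany("INSERT INTO mrhier3 VALUES (?, ?)",
                [("A1", "X1.X2.X3"), ("A2", "X1.X4")])

# Equivalent of CROSS APPLY dbo.SplitStrings(ptr): one output row per
# dot-delimited item, then GROUP BY to drop duplicate (item, aui) pairs.
pairs = set()
for aui, ptr in con.execute("SELECT aui, ptr FROM mrhier3"):
    for item in ptr.split("."):
        if item:  # SplitStrings filters out NULL/empty items
            pairs.add((item, aui))
con.executemany("INSERT INTO mrhierlookup VALUES (?, ?)", sorted(pairs))

rows = con.execute(
    "SELECT heiraui, aui FROM mrhierlookup ORDER BY heiraui, aui").fetchall()
print(rows)
```

    The set plays the role of the GROUP BY: each (item, aui) pair is inserted once, however many times it appears across rows.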
    
    qid & accept id: (10189129, 10189450) query: Retrieving the previous 10 results from a table that are nearest to a certain date and maintaining ascending sorting order soup:

    This is a simple one, LIMIT the subquery

    \n
    SELECT ci.*\nFROM `calendar_item` AS `ci` \nWHERE (ci.id IN (\n    SELECT id FROM calendar_item \n    WHERE (end_time < FROM_UNIXTIME(1334667600))\n    ORDER BY end_time DESC\n    LIMIT 10\n))\nGROUP BY `ci`.`id` \nORDER BY `ci`.`end_time` ASC\nLIMIT 10\n
    \n

    Without the limit in the subquery you are selecting ALL rows with a timestamp < FROM_UNIXTIMESTAMP. You are then reordering ASC and selecting the first 10, i.e. the earliest 10.

    \n

    If you limit the subquery you get the 10 highest which satisfy your FROM_UNIXTIME, and the outer can then select them.

    \n

    An alternative (and my preferred) would be the following, where the subquery gets the data, and the outer query simply reorders it before spitting it back out.

    \n
    SELECT i.*\nFROM (\n    SELECT ci.*\n    FROM calendar_item AS ci\n    WHERE ci.end_time < FROM_UNIXTIME(1334667600)\n    ORDER BY ci.end_time DESC\n    LIMIT 10\n) AS i\nORDER BY i.`end_time` ASC\n
    \n soup wrap:

    This is a simple one, LIMIT the subquery

    SELECT ci.*
    FROM `calendar_item` AS `ci` 
    WHERE (ci.id IN (
        SELECT id FROM calendar_item 
        WHERE (end_time < FROM_UNIXTIME(1334667600))
        ORDER BY end_time DESC
        LIMIT 10
    ))
    GROUP BY `ci`.`id` 
    ORDER BY `ci`.`end_time` ASC
    LIMIT 10
    

    Without the limit in the subquery you are selecting ALL rows with a timestamp < FROM_UNIXTIME(...). You are then reordering ASC and selecting the first 10, i.e. the earliest 10.

    If you limit the subquery you get the 10 highest which satisfy your FROM_UNIXTIME, and the outer can then select them.

    An alternative (and my preferred approach) would be the following, where the subquery gets the data, and the outer query simply reorders it before spitting it back out.

    SELECT i.*
    FROM (
        SELECT ci.*
        FROM calendar_item AS ci
        WHERE ci.end_time < FROM_UNIXTIME(1334667600)
        ORDER BY ci.end_time DESC
        LIMIT 10
    ) AS i
    ORDER BY i.`end_time` ASC
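    A minimal sqlite3 sketch of the preferred form (sample data invented; a plain numeric cutoff stands in for FROM_UNIXTIME): the inner query grabs the nearest rows below the cutoff, the outer query restores ascending order:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE calendar_item (id INT, end_time INT)")
con.executemany("INSERT INTO calendar_item VALUES (?, ?)",
                [(i, 100 + i) for i in range(1, 8)])  # end_times 101..107

rows = con.execute("""
    SELECT i.id, i.end_time
    FROM (
        SELECT ci.id, ci.end_time
        FROM calendar_item AS ci
        WHERE ci.end_time < 106        -- stands in for FROM_UNIXTIME(...)
        ORDER BY ci.end_time DESC      -- nearest to the cutoff first
        LIMIT 3
    ) AS i
    ORDER BY i.end_time ASC            -- outer query restores ascending order
""").fetchall()
print(rows)  # [(3, 103), (4, 104), (5, 105)]
```

    Without the inner LIMIT, the ascending outer sort would hand back the three earliest rows (101, 102, 103) instead of the three nearest to the cutoff.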
    
    qid & accept id: (10196144, 10196628) query: SQL include duplicates in an SELECT statement soup:

    It seems like you just want something like this:

    \n
    SELECT C_NAME, AnswerNum\nFROM\n(\nSELECT C.C_NAME, "1" AS AnswerNum, T.USER_ID\nFROM COUNTRY C \n    JOIN TBL_ANSWERS T \n        ON  T.ANSWER1_ID = C.C_ID \nUNION ALL\nSELECT C.C_NAME, "2" AS AnswerNum, T.USER_ID\nFROM COUNTRY C \n    JOIN TBL_ANSWERS T \n        ON  T.ANSWER2_ID = C.C_ID \n...\nUNION ALL\nSELECT C.C_NAME, "8" AS AnswerNum, T.USER_ID\nFROM COUNTRY C \n    JOIN TBL_ANSWERS T \n        ON  T.ANSWER8_ID = C.C_ID \n) AS AnswersJoined\nWHERE USER_ID = '4' \n
    \n

    However, I would seriously consider reworking your tables so that you use relationship mapping tables to figure out the questions and answers. This would allow this to be more easily created in one query

    \n

    Something like

    \n

    Tbl_Answer

    \n
     Question_Id|User_Id|Response_Id\n
    \n

    Tbl_Question

    \n
     Id|QuestionNumber\n
    \n

    This would allow you to just run a simple BETWEEN. Something like this:

    \n
    SELECT C.Name\nFROM Country C\nWHERE EXISTS\n(\n    SELECT 1 \n    FROM Tbl_Answer T\n        JOIN Tbl_Question Q\n            ON Q.Id = T.Question_Id\n    WHERE T.User_Id = 4 AND T.Response_Id = C.C_ID\n        AND Q.QuestionNumber BETWEEN 1 AND 8\n)\n
    \n soup wrap:

    It seems like you just want something like this:

    SELECT C_NAME, AnswerNum
    FROM
    (
    SELECT C.C_NAME, "1" AS AnswerNum, T.USER_ID
    FROM COUNTRY C 
        JOIN TBL_ANSWERS T 
            ON  T.ANSWER1_ID = C.C_ID 
    UNION ALL
    SELECT C.C_NAME, "2" AS AnswerNum, T.USER_ID
    FROM COUNTRY C 
        JOIN TBL_ANSWERS T 
            ON  T.ANSWER2_ID = C.C_ID 
    ...
    UNION ALL
    SELECT C.C_NAME, "8" AS AnswerNum, T.USER_ID
    FROM COUNTRY C 
        JOIN TBL_ANSWERS T 
            ON  T.ANSWER8_ID = C.C_ID 
    ) AS AnswersJoined
    WHERE USER_ID = '4' 
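    A minimal sqlite3 sketch of the UNION ALL unpivot, reduced to two answer columns (table and column names follow the answer; the sample data is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE country (c_id INT, c_name TEXT)")
con.execute("CREATE TABLE tbl_answers (user_id INT, answer1_id INT, answer2_id INT)")
con.executemany("INSERT INTO country VALUES (?, ?)",
                [(1, "France"), (2, "Spain")])
con.execute("INSERT INTO tbl_answers VALUES (4, 1, 2)")

# Each UNION ALL branch turns one answer column into rows, so duplicates
# across answer columns are preserved rather than collapsed.
rows = con.execute("""
    SELECT c_name, answernum FROM (
        SELECT c.c_name, '1' AS answernum, t.user_id
        FROM country c JOIN tbl_answers t ON t.answer1_id = c.c_id
        UNION ALL
        SELECT c.c_name, '2' AS answernum, t.user_id
        FROM country c JOIN tbl_answers t ON t.answer2_id = c.c_id
    ) AS answersjoined
    WHERE user_id = 4
    ORDER BY answernum
""").fetchall()
print(rows)  # [('France', '1'), ('Spain', '2')]
```

    UNION ALL (rather than UNION) is what keeps a country that appears in several answer columns showing up once per column.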
    

    However, I would seriously consider reworking your tables so that you use relationship mapping tables to represent the questions and answers. That would allow the result to be produced more easily in one query.

    Something like

    Tbl_Answer

     Question_Id|User_Id|Response_Id
    

    Tbl_Question

     Id|QuestionNumber
    

    This would allow you to just run a simple BETWEEN. Something like this:

    SELECT C.Name
    FROM Country C
    WHERE EXISTS
    (
        SELECT 1 
        FROM Tbl_Answer T
            JOIN Tbl_Question Q
                ON Q.Id = T.Question_Id
        WHERE T.User_Id = 4 AND T.Response_Id = C.C_ID
            AND Q.QuestionNumber BETWEEN 1 AND 8
    )
    
    qid & accept id: (10199927, 10200631) query: Find chars in any order in Sql Server soup:

    The easiest thing to do is split and pivot and then join.

    \n

    So avi becomes three rows in a letters table:

    \n
    a\nv\ni\n
    \n

    Then join to the word list with INNER JOIN ON CHARINDEX(letter, word) > 0

    \n

    Use GROUP BY word

    \n

    with HAVING COUNT(*) = (SELECT COUNT(*) FROM letters)

    \n

    In this example, I just picked up and modified a cte from here Split a string into individual characters in Sql Server 2005 to avoid having to fool around with a numbers table (but I normally would probably use a numbers table to do my pivot).

    \n

    http://data.stackexchange.com/stackoverflow/query/67103/http-stackoverflow-com-questions-10199927-find-chars-in-any-order-in-sql-server

    \n
    DECLARE @t AS TABLE (search varchar(100));\nINSERT INTO @t VALUES ('avi');\n\nDECLARE @words AS TABLE (word varchar(100));\nINSERT INTO @words VALUES ('avion'), ('iva'), ('name');\nwith cte as\n(\n  select substring(search, 1, 1) as letter,\n         stuff(search, 1, 1, '') as search,\n         1 as RowID\n  from @t\n  union all\n  select substring(search, 1, 1) as letter,\n         stuff(search, 1, 1, '') as search,\n         RowID + 1 as RowID\n  from cte\n  where len(search) > 0\n)\n,letters AS (\n  SELECT DISTINCT letter FROM cte\n)\nSELECT words.word\nFROM letters\nINNER JOIN @words AS words\n    ON CHARINDEX(letter, word) > 0\nGROUP BY words.word\nHAVING COUNT(*) = (SELECT COUNT(*) FROM letters)\n
    \n soup wrap:

    The easiest thing to do is split and pivot and then join.

    So avi becomes three rows in a letters table:

    a
    v
    i
    

    Then join to the word list with INNER JOIN ON CHARINDEX(letter, word) > 0

    Use GROUP BY word

    with HAVING COUNT(*) = (SELECT COUNT(*) FROM letters)

    In this example, I just picked up and modified a CTE from "Split a string into individual characters in Sql Server 2005" to avoid having to fool around with a numbers table (but I would normally use a numbers table to do my pivot).

    http://data.stackexchange.com/stackoverflow/query/67103/http-stackoverflow-com-questions-10199927-find-chars-in-any-order-in-sql-server

    DECLARE @t AS TABLE (search varchar(100));
    INSERT INTO @t VALUES ('avi');
    
    DECLARE @words AS TABLE (word varchar(100));
    INSERT INTO @words VALUES ('avion'), ('iva'), ('name');
    with cte as
    (
      select substring(search, 1, 1) as letter,
             stuff(search, 1, 1, '') as search,
             1 as RowID
      from @t
      union all
      select substring(search, 1, 1) as letter,
             stuff(search, 1, 1, '') as search,
             RowID + 1 as RowID
      from cte
      where len(search) > 0
    )
    ,letters AS (
      SELECT DISTINCT letter FROM cte
    )
    SELECT words.word
    FROM letters
    INNER JOIN @words AS words
        ON CHARINDEX(letter, word) > 0
    GROUP BY words.word
    HAVING COUNT(*) = (SELECT COUNT(*) FROM letters)
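    A minimal sqlite3 sketch of the join-and-count idea (INSTR stands in for SQL Server's CHARINDEX; the search letters are assumed pre-split and distinct, and the word list is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE letters (letter TEXT)")
con.execute("CREATE TABLE words (word TEXT)")
con.executemany("INSERT INTO letters VALUES (?)", [("a",), ("v",), ("i",)])
con.executemany("INSERT INTO words VALUES (?)",
                [("avion",), ("iva",), ("name",)])

# INSTR(word, letter) > 0 plays the role of CHARINDEX(letter, word) > 0;
# a word qualifies only if it matched every row of the letters table.
rows = con.execute("""
    SELECT w.word
    FROM letters l
    INNER JOIN words w ON INSTR(w.word, l.letter) > 0
    GROUP BY w.word
    HAVING COUNT(*) = (SELECT COUNT(*) FROM letters)
    ORDER BY w.word
""").fetchall()
print(rows)  # [('avion',), ('iva',)] -- 'name' lacks 'v' and 'i'
```

    The DISTINCT step in the answer's CTE matters: if the search string contained a repeated letter, the count comparison would otherwise never be satisfied.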
    
    qid & accept id: (10209706, 10209896) query: Elegant way to create a circular permutation with MySQL soup:

    You can use the mod operator, % to ORDER BY

    \n
    DECLARE @maxId AS INT\nSELECT @maxId = MAX(Id) FROM MyTable\n\nSELECT id FROM MyTable\nORDER BY Id % @maxId \n
    \n

    You can get further rotations by adding to Id, ie

    \n
    ORDER BY (Id + 1) % @maxId\n
    \n

    get you

    \n
    3\n4\n1\n2\n
    \n

    Working SQL Fiddle (which I just found out exists)\nhttp://sqlfiddle.com/#!3/a7f15/5

    \n soup wrap:

    You can use the modulo operator, %, in the ORDER BY

    DECLARE @maxId AS INT
    SELECT @maxId = MAX(Id) FROM MyTable
    
    SELECT id FROM MyTable
    ORDER BY Id % @maxId 
    

    You can get further rotations by adding to Id, i.e.

    ORDER BY (Id + 1) % @maxId
    

    gets you

    3
    4
    1
    2
    

    Working SQL Fiddle (which I just found out exists) http://sqlfiddle.com/#!3/a7f15/5
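    A minimal sqlite3 sketch of the rotation, using ids 1-4 as in the example output above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (id INT)")
con.executemany("INSERT INTO mytable VALUES (?)", [(1,), (2,), (3,), (4,)])

(max_id,) = con.execute("SELECT MAX(id) FROM mytable").fetchone()

# (id + shift) % max_id rotates the ordering; shift = 1 reproduces 3,4,1,2.
rows = con.execute(
    "SELECT id FROM mytable ORDER BY (id + 1) % ?", (max_id,)).fetchall()
print([r[0] for r in rows])  # [3, 4, 1, 2]
```

    Varying the added constant from 0 to max_id - 1 walks through every circular permutation of the sequence.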

    qid & accept id: (10240035, 10240129) query: SQL and Counting soup:

    Give this a try:

    \n
    select name,\n    count(case when grade in ('A', 'B', 'C') then 1 end) totalPass,\n    count(case when grade = 'A' then 1 end) totalA,\n    count(case when grade = 'B' then 1 end) totalB,\n    count(case when grade = 'C' then 1 end) totalC\nfrom t\ngroup by name\n
    \n

    Here is the fiddle.

    \n

    Or we can make it even simpler if you were using MySQL:

    \n
    select name,\n    sum(grade in ('A', 'B', 'C')) totalPass,\n    sum(grade = 'A') totalA,\n    sum(grade = 'B') totalB,\n    sum(grade = 'C') totalC\nfrom t\ngroup by name\n
    \n

    Here is the fiddle.

    \n soup wrap:

    Give this a try:

    select name,
        count(case when grade in ('A', 'B', 'C') then 1 end) totalPass,
        count(case when grade = 'A' then 1 end) totalA,
        count(case when grade = 'B' then 1 end) totalB,
        count(case when grade = 'C' then 1 end) totalC
    from t
    group by name
    

    Here is the fiddle.

    Or we can make it even simpler if you are using MySQL:

    select name,
        sum(grade in ('A', 'B', 'C')) totalPass,
        sum(grade = 'A') totalA,
        sum(grade = 'B') totalB,
        sum(grade = 'C') totalC
    from t
    group by name
    

    Here is the fiddle.
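    A minimal sqlite3 sketch of the first, portable form (the sample grades are invented); COUNT of a CASE with no ELSE skips the NULLs, which is what makes the conditional count work:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (name TEXT, grade TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [
    ("ann", "A"), ("ann", "C"), ("ann", "F"),
    ("bob", "B"), ("bob", "B"),
])

# CASE yields 1 for matching rows and NULL otherwise; COUNT ignores NULL,
# so each column counts only the rows satisfying its condition.
rows = con.execute("""
    SELECT name,
        COUNT(CASE WHEN grade IN ('A','B','C') THEN 1 END) AS totalPass,
        COUNT(CASE WHEN grade = 'A' THEN 1 END) AS totalA,
        COUNT(CASE WHEN grade = 'B' THEN 1 END) AS totalB,
        COUNT(CASE WHEN grade = 'C' THEN 1 END) AS totalC
    FROM t
    GROUP BY name
    ORDER BY name
""").fetchall()
print(rows)  # [('ann', 2, 1, 0, 1), ('bob', 2, 0, 2, 0)]
```

    The MySQL SUM(condition) shorthand relies on booleans evaluating to 0/1, which is why it is not portable to engines with a real boolean type.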

    qid & accept id: (10277115, 10277164) query: Select newest record group by username in SQL Server 2008 soup:

    You have several options here but using adding a ROW_NUMBER grouped by user and sorted (descending) on your timestamp allows you to easily select the latest records.

    \n

    Using ROW_NUMBER

    \n
    SELECT *\nFROM   (\n         SELECT ID, voting_ID, username, timestamp, XMLBallot\n                , rn = ROW_NUMBER() OVER (PARTITION BY voting_ID, username ORDER BY timestamp DESC)\n         FROM   Ballots\n       ) bt \nWHERE  rn = 1\n
    \n

    Alternatively, you can select the maximum timestamp per user and join on that.

    \n

    Using MAX

    \n
    SELECT bt.ID, bt.voting_ID, bt.username, bt.timestamp, bt.XMLBallot\nFROM   Ballots bt\n       INNER JOIN (\n          SELECT username, voting_ID, timestamp = MAX(timestamp)\n          FROM   Ballots\n          GROUP BY\n                 username, voting_ID\n        ) btm ON btm.username = bt.Username\n                 AND btm.voting_ID = bt.voting_ID\n                 AND btm.timestamp = bt.timestamp\n
    \n soup wrap:

    You have several options here, but adding a ROW_NUMBER grouped by user and sorted (descending) on your timestamp allows you to easily select the latest records.

    Using ROW_NUMBER

    SELECT *
    FROM   (
             SELECT ID, voting_ID, username, timestamp, XMLBallot
                    , rn = ROW_NUMBER() OVER (PARTITION BY voting_ID, username ORDER BY timestamp DESC)
             FROM   Ballots
           ) bt 
    WHERE  rn = 1
    

    Alternatively, you can select the maximum timestamp per user and join on that.

    Using MAX

    SELECT bt.ID, bt.voting_ID, bt.username, bt.timestamp, bt.XMLBallot
    FROM   Ballots bt
           INNER JOIN (
              SELECT username, voting_ID, timestamp = MAX(timestamp)
              FROM   Ballots
              GROUP BY
                     username, voting_ID
            ) btm ON btm.username = bt.Username
                     AND btm.voting_ID = bt.voting_ID
                     AND btm.timestamp = bt.timestamp
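    A minimal sqlite3 sketch of the MAX-and-join variant (column names shortened, sample ballots invented); each user's newest row per voting_ID survives the join:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ballots "
            "(id INT, voting_id INT, username TEXT, ts INT, xmlballot TEXT)")
con.executemany("INSERT INTO ballots VALUES (?, ?, ?, ?, ?)", [
    (1, 10, "alice", 100, "<a/>"),
    (2, 10, "alice", 200, "<b/>"),  # alice's latest ballot for voting 10
    (3, 10, "bob",   150, "<c/>"),
])

# The derived table finds each (username, voting_id) group's maximum
# timestamp; joining back on all three columns keeps only those rows.
rows = con.execute("""
    SELECT bt.id, bt.username, bt.ts
    FROM ballots bt
    INNER JOIN (
        SELECT username, voting_id, MAX(ts) AS ts
        FROM ballots
        GROUP BY username, voting_id
    ) btm ON btm.username = bt.username
         AND btm.voting_id = bt.voting_id
         AND btm.ts = bt.ts
    ORDER BY bt.username
""").fetchall()
print(rows)  # [(2, 'alice', 200), (3, 'bob', 150)]
```

    Unlike ROW_NUMBER, this form returns two rows if a user submitted two ballots with the exact same timestamp, which may or may not be what you want.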
    
    qid & accept id: (10296422, 10296593) query: How to assign an id to a group SQL Server soup:

    I'd be tempted to create a separate table, RunInformation, with a primary key column, Id, and a RunDate column:

    \n
    Id -- RunDate\n
    \n

    You could then replace the dateRan column from your table with a reference to the RunInformation table. This will allow you to store additional information about the run in future, if the needs arises.

    \n
    Id -- Name -- AttributeIMeasure -- RunInformationId\n
    \n soup wrap:

    I'd be tempted to create a separate table, RunInformation, with a primary key column, Id, and a RunDate column:

    Id -- RunDate
    

    You could then replace the dateRan column from your table with a reference to the RunInformation table. This will allow you to store additional information about the run in the future, if the need arises.

    Id -- Name -- AttributeIMeasure -- RunInformationId
    
    qid & accept id: (10310499, 10310674) query: How to avoid "Ambiguous field in query" without adding Table Name or Table Alias in where clause soup:

    If you for some reason can't live with doing

    \n
    select T1.name, T1.address, T1.phone, T2.title, T2.description from T1\nLeft Join T2 on T1.CID=T2.ID\nwhere T2.STATUS = 1\n
    \n

    Then I guess you could

    \n
    SELECT T1.name, T1.address, T1.phone, T2.title, T2.description \nFROM (  SELECT CID, name, address, phone\n        FROM T1) AS T1\nLEFT JOIN T2\nON T1.CID=T2.ID\nWHERE STATUS = 1\n
    \n

    Basicly just skip getting the STATUS column from T1. Then there can be no conflict.

    \n

    Bottomline; there's no simple way of doing this. The one closest to simple would be to have different names of both STATUS columns, but even that seems extreme.

    \n soup wrap:

    If you for some reason can't live with doing

    select T1.name, T1.address, T1.phone, T2.title, T2.description from T1
    Left Join T2 on T1.CID=T2.ID
    where T2.STATUS = 1
    

    Then I guess you could

    SELECT T1.name, T1.address, T1.phone, T2.title, T2.description 
    FROM (  SELECT CID, name, address, phone
            FROM T1) AS T1
    LEFT JOIN T2
    ON T1.CID=T2.ID
    WHERE STATUS = 1
    

    Basically, just skip selecting the STATUS column from T1; then there can be no conflict.

    Bottom line: there's no simple way of doing this. The closest to simple would be giving the two STATUS columns different names, but even that seems extreme.
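    A minimal sqlite3 sketch of the conflict and the derived-table workaround (tiny invented tables; SQLite rejects the unqualified column just as other engines do):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (cid INT, name TEXT, status INT)")
con.execute("CREATE TABLE t2 (id INT, title TEXT, status INT)")
con.execute("INSERT INTO t1 VALUES (1, 'ann', 0)")
con.execute("INSERT INTO t2 VALUES (1, 'hello', 1)")

# The bare column is ambiguous while both tables expose it:
try:
    con.execute("SELECT name, title FROM t1 LEFT JOIN t2 ON t1.cid = t2.id "
                "WHERE status = 1")
except sqlite3.OperationalError as e:
    print(e)  # SQLite reports the column as ambiguous

# Projecting t1 through a derived table that omits status removes the conflict:
rows = con.execute("""
    SELECT name, title
    FROM (SELECT cid, name FROM t1) AS t1
    LEFT JOIN t2 ON t1.cid = t2.id
    WHERE status = 1
""").fetchall()
print(rows)  # [('ann', 'hello')]
```

    Inside the outer query only t2 still has a status column, so the unqualified reference resolves cleanly.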

    qid & accept id: (10330898, 10330971) query: sql query to set year as column name soup:

    Maybe something like this:

    \n
    SELECT \n    item_name, \n    SUM(CASE WHEN YEAR( DATE )=2011 THEN item_sold_qty ELSE 0 END) AS '2011',\n    SUM(CASE WHEN YEAR( DATE )=2012 THEN item_sold_qty ELSE 0 END) AS '2012'\nFROM \n    item\nJOIN sales ON item.id = sales.item_number\nGROUP BY\n    item_name\nORDER BY \n    item_name\n
    \n

    EDIT

    \n

    If you want the other years and still sum them. Then you can do this:

    \n
    SELECT \n    item_name, \n    SUM(CASE WHEN YEAR( DATE )=2011 THEN item_sold_qty ELSE 0 END) AS '2011',\n    SUM(CASE WHEN YEAR( DATE )=2012 THEN item_sold_qty ELSE 0 END) AS '2012',\n    SUM(CASE WHEN NOT YEAR( DATE ) IN (2011,2012) THEN item_sold_qty ELSE 0 END) AS 'AllOtherYears'\nFROM \n    item\nJOIN sales ON item.id = sales.item_number\nGROUP BY\n    item_name\nORDER BY \n    item_name\n
    \n

    EDIT2

    \n

    If you have a lot of years and you do not want to keep on adding years. Then you need to using dynamic sql. That means that you concat a varchar of the sql and then execute it.

    \n

    Useful References:

    \n\n soup wrap:

    Maybe something like this:

    SELECT 
        item_name, 
        SUM(CASE WHEN YEAR( DATE )=2011 THEN item_sold_qty ELSE 0 END) AS '2011',
        SUM(CASE WHEN YEAR( DATE )=2012 THEN item_sold_qty ELSE 0 END) AS '2012'
    FROM 
        item
    JOIN sales ON item.id = sales.item_number
    GROUP BY
        item_name
    ORDER BY 
        item_name
    

    EDIT

    If you want the other years summed as well, you can do this:

    SELECT 
        item_name, 
        SUM(CASE WHEN YEAR( DATE )=2011 THEN item_sold_qty ELSE 0 END) AS '2011',
        SUM(CASE WHEN YEAR( DATE )=2012 THEN item_sold_qty ELSE 0 END) AS '2012',
        SUM(CASE WHEN NOT YEAR( DATE ) IN (2011,2012) THEN item_sold_qty ELSE 0 END) AS 'AllOtherYears'
    FROM 
        item
    JOIN sales ON item.id = sales.item_number
    GROUP BY
        item_name
    ORDER BY 
        item_name
    

    EDIT2

    If you have a lot of years and do not want to keep adding columns, you need dynamic SQL: concatenate the statement into a varchar and then execute it.
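
    A minimal sketch of that dynamic-SQL idea, here done in Python with sqlite3 (SQLite has no YEAR(), so strftime('%Y', ...) stands in; table names follow the answer, sample data is invented). The year list is discovered from the data and one SUM(CASE ...) column is concatenated per year:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE item  (id INTEGER, item_name TEXT);
CREATE TABLE sales (item_number INTEGER, date TEXT, item_sold_qty INTEGER);
INSERT INTO item VALUES (1, 'widget');
INSERT INTO sales VALUES (1, '2011-05-01', 3), (1, '2012-06-01', 5), (1, '2013-01-01', 2);
""")

# Discover the distinct years, then build one pivot column per year.
years = [r[0] for r in conn.execute(
    "SELECT DISTINCT strftime('%Y', date) FROM sales ORDER BY 1")]
cols = ", ".join(
    "SUM(CASE WHEN strftime('%Y', date) = '{0}' THEN item_sold_qty ELSE 0 END) AS \"{0}\"".format(y)
    for y in years)
sql = ("SELECT item_name, " + cols +
       " FROM item JOIN sales ON item.id = sales.item_number"
       " GROUP BY item_name ORDER BY item_name")
rows = conn.execute(sql).fetchall()
```

Since the interpolated values come straight out of the data, validate them (or keep them numeric) before splicing them into the SQL string in real code.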


    qid & accept id: (10338000, 10340501) query: how to do content based authorization? soup:


    I think you're on the right track with views, but since each call will need to pass the user ID, it sounds like what you really need are table-valued functions. I'm most familiar with Microsoft SQL, where it would look something like this:

    SELECT P.*
    FROM Projects AS P
         INNER JOIN dbo.AuthProjects(@UserID) AS AP ON P.ProjectID = AP.ProjectID
    

    Note that the TVF literally returns a table, to which you would join to see which projects are available. The TVF definition might look something like this:

    CREATE FUNCTION dbo.AuthProjects(@UserID INT)
        RETURNS @Results TABLE (ProjectID INT NOT NULL, WriteAccess BIT NOT NULL)
    AS BEGIN
        INSERT INTO @Results (ProjectID, WriteAccess)
            SELECT
                ProjectID, WriteAccess
            FROM
                Authorizations
            WHERE
                UserID = @UserID
    
        -- Additional logic for more ways a project may be authorized
    
        RETURN
    END
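
    For a quick check of the idea outside SQL Server: SQLite has no table-valued functions, but the same row-level filtering can be sketched as a parameterized join against the Authorizations table (table and column names taken from the answer; sample data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Projects (ProjectID INTEGER, Name TEXT);
CREATE TABLE Authorizations (UserID INTEGER, ProjectID INTEGER, WriteAccess INTEGER);
INSERT INTO Projects VALUES (1, 'Alpha'), (2, 'Beta');
INSERT INTO Authorizations VALUES (7, 1, 1);
""")

user_id = 7
# The join plays the role of dbo.AuthProjects(@UserID):
# only projects authorized for this user survive the join.
rows = conn.execute("""
    SELECT P.ProjectID, P.Name
    FROM Projects AS P
    JOIN Authorizations AS AP
      ON P.ProjectID = AP.ProjectID AND AP.UserID = ?
""", (user_id,)).fetchall()
```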
    
    qid & accept id: (10389260, 10389455) query: Select 30% of each column value soup:


    try something like this:

    DECLARE @YourTable table (A int, b varchar(10))
    INSERT @YourTable VALUES (0, 'hello') --OP's data
    INSERT @YourTable VALUES (0, 'test')
    INSERT @YourTable VALUES (0, 'hi')
    INSERT @YourTable VALUES (1, 'blah1')
    INSERT @YourTable VALUES (1, 'blah2')
    INSERT @YourTable VALUES (1, 'blah3')
    INSERT @YourTable VALUES (1, 'blah4')
    INSERT @YourTable VALUES (1, 'blah5')
    INSERT @YourTable VALUES (1, 'blah6')
    
    ;WITH NumberedRows AS
    (   SELECT 
            A,B,ROW_NUMBER() OVER (PARTITION BY A ORDER BY A,B) AS RowNumber
            FROM @YourTable
    )
    , GroupCounts AS
    (   SELECT
            A,MAX(RowNumber) AS MaxA
            FROM NumberedRows
            GROUP BY A
    )
    SELECT
        n.a,n.b
        FROM NumberedRows           n
            INNER JOIN GroupCounts  c ON n.A=c.A
        WHERE n.RowNumber<=(c.MaxA+1)*0.3
    

    OUTPUT:

    a           b
    ----------- ----------
    0           hello
    1           blah1
    1           blah2
    
    (3 row(s) affected)
    

    EDIT based on the great idea in the comment from Andriy M

    ;WITH NumberedRows AS
    (   SELECT 
            A,B,ROW_NUMBER() OVER (PARTITION BY A ORDER BY A,B) AS RowNumber
                ,COUNT(*) OVER (PARTITION BY A) AS TotalOf
            FROM @YourTable
    )
    SELECT
        n.a,n.b
        FROM NumberedRows            n
        WHERE n.RowNumber<=(n.TotalOf+1)*0.3
        ORDER BY A
    

    OUTPUT:

    a           b
    ----------- ----------
    0           hello
    1           blah1
    1           blah2
    
    (3 row(s) affected)
    

    EDIT here are "random" rows, using Andriy M idea:

    DECLARE @YourTable table (A int, b varchar(10))
    INSERT @YourTable VALUES (0, 'hello') --OP's data
    INSERT @YourTable VALUES (0, 'test')
    INSERT @YourTable VALUES (0, 'hi')
    INSERT @YourTable VALUES (1, 'blah1')
    INSERT @YourTable VALUES (1, 'blah2')
    INSERT @YourTable VALUES (1, 'blah3')
    INSERT @YourTable VALUES (1, 'blah4')
    INSERT @YourTable VALUES (1, 'blah5')
    INSERT @YourTable VALUES (1, 'blah6')
    
    ;WITH NumberedRows AS
    (   SELECT 
            A,B,ROW_NUMBER() OVER (PARTITION BY A ORDER BY newid()) AS RowNumber
            FROM @YourTable
    )
    , GroupCounts AS (SELECT A,COUNT(A) AS MaxA FROM NumberedRows GROUP BY A)
    SELECT
        n.A,n.B
        FROM NumberedRows           n
            INNER JOIN GroupCounts  c ON n.A=c.A
        WHERE n.RowNumber<=(c.MaxA+1)*0.3
        ORDER BY n.A
    

    OUTPUT:

    a           b
    ----------- ----------
    0           hi
    1           blah3
    1           blah6
    
    (3 row(s) affected)
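
    The COUNT(*) OVER variant above runs essentially unchanged on SQLite 3.25+ as well; here is a runnable sketch using the same sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE YourTable (A INTEGER, b TEXT)")
conn.executemany("INSERT INTO YourTable VALUES (?, ?)",
                 [(0, 'hello'), (0, 'test'), (0, 'hi'),
                  (1, 'blah1'), (1, 'blah2'), (1, 'blah3'),
                  (1, 'blah4'), (1, 'blah5'), (1, 'blah6')])

# Number the rows within each group and carry the group size alongside,
# then keep roughly the first 30% of each group.
rows = conn.execute("""
    WITH NumberedRows AS (
        SELECT A, b,
               ROW_NUMBER() OVER (PARTITION BY A ORDER BY A, b) AS RowNumber,
               COUNT(*) OVER (PARTITION BY A) AS TotalOf
        FROM YourTable
    )
    SELECT A, b FROM NumberedRows
    WHERE RowNumber <= (TotalOf + 1) * 0.3
    ORDER BY A, b
""").fetchall()
```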
    
    qid & accept id: (10423479, 10423557) query: MySQL Retrieving data from two tables using inner join syntax soup:


    try this:

    SELECT  a.Event_ID, 
            a.Competitor_ID,
            a.Place,
            COALESCE(b.money, 0) as `Money`
    FROM    entry a left join prize b
                on  (a.event_id = b.event_ID) AND
                    (a.place = b.Place)
    

    hope this helps.

    EVENT_ID    COMPETITOR_ID   PLACE   MONEY
    101           101            1      120
    101           102            2       60
    101           201            3       30
    101           301            4        0   -- << this is what you're looking for
    102           201            2        5
    103           201            3       40
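
    The same LEFT JOIN + COALESCE pattern, runnable in Python's sqlite3 with a trimmed-down version of the data (only two entry rows, invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE entry (Event_ID INTEGER, Competitor_ID INTEGER, Place INTEGER);
CREATE TABLE prize (event_ID INTEGER, Place INTEGER, money INTEGER);
INSERT INTO entry VALUES (101, 101, 1), (101, 301, 4);
INSERT INTO prize VALUES (101, 1, 120);
""")

# LEFT JOIN keeps entries with no matching prize;
# COALESCE turns their NULL money into 0.
rows = conn.execute("""
    SELECT a.Event_ID, a.Competitor_ID, a.Place, COALESCE(b.money, 0) AS Money
    FROM entry a LEFT JOIN prize b
      ON a.Event_ID = b.event_ID AND a.Place = b.Place
    ORDER BY a.Competitor_ID
""").fetchall()
```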
    
    qid & accept id: (10477017, 10477189) query: SQL Compare with one column, but returns all columns if matched soup:


    I think you want all rows from Table1 and Table2 such that each IDCodeField value appears in only one of the tables. You wish to exclude rows where the same value appears in both tables.

    Ignoring, for the moment, the question of what to do if the same value appears in the same table, the simplest query would be:

    SELECT * from Table1 T1 full outer join Table2 T2
    ON T1.IDCodeField = T2.IDCodeField
    WHERE T1.IDCodeField is null or T2.IDCodeField is null
    

    This will give you the results, but possibly not in the format you're seeking - the result rows will be as wide as both tables combined, and the columns from the non-matching table will be NULL.

    Or, we could do it in the UNION style from your question.

    SELECT * from Table1 where IDCodeField not in (select IDCodeField from Table2)
    UNION ALL
    SELECT * from Table2 where IDCodeField not in (select IDCodeField from Table1)
    

    Both of the above queries will return rows if the same IDCodeField value is duplicated only within a single table. If you wish to exclude this possibility, you might try finding the unique values first:

    ;With UniqueIDs as (
        SELECT IDCodeField
        FROM (
            SELECT IDCodeField from Table1
            union all
            select IDCodeField from Table2) t
        GROUP BY IDCodeField
        HAVING COUNT(*) = 1
    )
    SELECT * from (
        SELECT * from Table1
        union all
        select * from Table2
    ) t
      INNER JOIN
    UniqueIDs u
      ON
        t.IDCodeField = u.IDCodeField
    

    (Of course, all the uses of SELECT * above should be replaced with appropriate column lists)
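
    The UNION ALL + HAVING COUNT(*) = 1 approach is the most portable of the three; here it is runnable in Python's sqlite3 (two-column tables invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (IDCodeField INTEGER, payload TEXT);
CREATE TABLE Table2 (IDCodeField INTEGER, payload TEXT);
INSERT INTO Table1 VALUES (1, 'only-in-1'), (2, 'in-both');
INSERT INTO Table2 VALUES (2, 'in-both-too'), (3, 'only-in-2');
""")

# IDs appearing exactly once across both tables, then the matching rows.
rows = conn.execute("""
    WITH UniqueIDs AS (
        SELECT IDCodeField
        FROM (SELECT IDCodeField FROM Table1
              UNION ALL
              SELECT IDCodeField FROM Table2) t
        GROUP BY IDCodeField
        HAVING COUNT(*) = 1
    )
    SELECT t.IDCodeField, t.payload
    FROM (SELECT * FROM Table1 UNION ALL SELECT * FROM Table2) t
    JOIN UniqueIDs u ON t.IDCodeField = u.IDCodeField
    ORDER BY t.IDCodeField
""").fetchall()
```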

    qid & accept id: (10532323, 10554407) query: Replicate recent location soup:


    Try this:

    select *, CurrentLocation
    from tbl x
    
    outer apply
    (
      select top 1 location as CurrentLocation
      from tbl
      where [user] = x.[user]
        and id <= x.id
      order by id
    
    ) y
    
    order by id
    

    Output:

    ID      USER    DATE            LOCATION    CURRENTLOCATION
    1       Tom     2012-03-06      US          US
    2       Tom     2012-02-04      UK          US
    3       Tom     2012-01-06      Uk          US
    4       Bob     2012-03-06      UK          UK
    5       Bob     2012-02-04      UK          UK
    6       Bob     2012-01-06      AUS         UK
    7       Dev     2012-03-06      US          US
    8       Dev     2012-02-04      AUS         US
    9       Nic     2012-01-06      US          US
    

    Live test: http://www.sqlfiddle.com/#!3/83a6a/7
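
    For databases without OUTER APPLY (MySQL, SQLite, ...), a correlated scalar subquery gives the same "first row per user" result; a runnable sketch with a trimmed-down version of the data (date column omitted for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (id INTEGER, user TEXT, location TEXT)")
conn.executemany("INSERT INTO tbl VALUES (?, ?, ?)",
                 [(1, 'Tom', 'US'), (2, 'Tom', 'UK'), (3, 'Tom', 'Uk')])

# The correlated subquery stands in for OUTER APPLY:
# for each row, the lowest id for the same user supplies CurrentLocation.
rows = conn.execute("""
    SELECT id, user, location,
           (SELECT location FROM tbl
            WHERE user = x.user AND id <= x.id
            ORDER BY id LIMIT 1) AS CurrentLocation
    FROM tbl x ORDER BY id
""").fetchall()
```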

    qid & accept id: (10659824, 10659984) query: Mysql Count Distinct results soup:


    If you want to simultaneously count the number of rows with multiple specific criteria in a data set, you can use the pattern COUNT(CASE WHEN criteria THEN 1 END). Here's an example that counts the number of rows for stats = 2, and for stats = 3:

    SELECT
      count(case when stats = 2 then 1 end) as ok,
      count(case when stats = 3 then 1 end) as not_ok
    from
      Table1
    

    Results:

    OK | NOT_OK
    -----------
    2  | 1
    

    Demo: http://www.sqlfiddle.com/#!2/82414/1
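
    The trick works because COUNT ignores NULLs, and a CASE with no ELSE yields NULL for non-matching rows. A runnable check in Python's sqlite3 (sample data invented to reproduce the 2/1 split):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (stats INTEGER)")
conn.executemany("INSERT INTO Table1 VALUES (?)", [(2,), (2,), (3,)])

# CASE with no ELSE is NULL for non-matching rows, and COUNT skips NULLs.
ok, not_ok = conn.execute("""
    SELECT COUNT(CASE WHEN stats = 2 THEN 1 END) AS ok,
           COUNT(CASE WHEN stats = 3 THEN 1 END) AS not_ok
    FROM Table1
""").fetchone()
```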

    qid & accept id: (10666965, 10667305) query: Joining two MySQL tables, but with additional conditions? soup:


    This is the answer:

    select a.id, a.name, a.category, a.price, b.filename as file_name
    from products a left join (
        select i.p_id, i.filename
        from (select p_id, min(priority) as min_p
              from images group by p_id) q
        left join images i on q.p_id = i.p_id and q.min_p = i.priority
    ) b on a.id = b.p_id
    where a.category in (1, 2, 3);
    

    EXPLANATION:

    First, for each product, you need the lowest image priority, which comes from this query:

    select p_id, min(priority) as min_p from images group by p_id;
    

    The result will be:

    +------+-------+
    | p_id | min_p |
    +------+-------+
    |    1 |     0 |
    |    2 |     2 |
    |    3 |     2 |
    |    4 |     1 |
    +------+-------+
    4 rows in set (0.00 sec)
    

    The next step is an outer join; in this case I'd choose (according to my preference) the left join, matching each product's minimum priority back to its image row:

    select i.p_id, i.filename from (select p_id, min(priority) as min_p
    from images group by p_id) q left join images i on q.p_id = i.p_id and q.min_p = i.priority;
    

    This query produces, in short, what you want:

    +------+----------+
    | p_id | filename |
    +------+----------+
    |    1 | image1   |
    |    2 | image3   |
    |    3 | image4   |
    |    4 | image7   |
    +------+----------+
    4 rows in set (0.00 sec)
    

    Now you just need to decorate this, again using left join:

    select a.id, a.name, a.category, a.price, b.filename as file_name
    from products a left join (
        select i.p_id, i.filename
        from (select p_id, min(priority) as min_p
              from images group by p_id) q
        left join images i on q.p_id = i.p_id and q.min_p = i.priority
    ) b on a.id = b.p_id
    where a.category in (1, 2, 3);
    

    And you'll get what you want:

    +------+-------+----------+-------+-----------+
    | id   | name  | category | price | file_name |
    +------+-------+----------+-------+-----------+
    |    1 | item1 |        1 |  0.99 | image1    |
    |    2 | item2 |        2 |  1.99 | image3    |
    |    3 | item3 |        3 |  2.95 | image4    |
    +------+-------+----------+-------+-----------+
    3 rows in set (0.00 sec)
    

    You can also put products on the right-hand side of the left join, depending on what you expect when a product has no images available. The query above will display such products anyway, with the file_name field as NULL.

    On the other hand, it will not display such products at all if you put products on the right-hand side of the left join.
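
    A runnable check of this greatest-n-per-group pattern in Python's sqlite3 (one product with two images, data invented; the join back on (p_id, priority) assumes priorities are unique within a product):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (id INTEGER, name TEXT, category INTEGER, price REAL);
CREATE TABLE images (id INTEGER, p_id INTEGER, filename TEXT, priority INTEGER);
INSERT INTO products VALUES (1, 'item1', 1, 0.99);
INSERT INTO images VALUES (1, 1, 'image1', 0), (2, 1, 'image2', 3);
""")

# Per-product minimum priority, joined back to pick that image's filename.
rows = conn.execute("""
    SELECT a.id, a.name, b.filename AS file_name
    FROM products a LEFT JOIN (
        SELECT i.p_id, i.filename
        FROM (SELECT p_id, MIN(priority) AS min_p FROM images GROUP BY p_id) q
        JOIN images i ON q.p_id = i.p_id AND q.min_p = i.priority
    ) b ON a.id = b.p_id
    WHERE a.category IN (1, 2, 3)
""").fetchall()
```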

    qid & accept id: (10670090, 10670126) query: SQL Query With Calculated MIN, Requesting Other Column Returns All Rows soup:


    You can use:

    SELECT TOP 1 ID, MIN(SQRT(POWER(100 - x, 2) + POWER(150 - y, 2))) AS distance FROM cabstands GROUP BY ID ORDER BY distance ASC
    

    Or for MySQL:

    SELECT ID, MIN(SQRT(POW(100 - x, 2) + POW(150 - y, 2))) AS distance FROM cabstands GROUP BY ID ORDER BY distance ASC LIMIT 1
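
    A runnable check of the distance expression: note the SQRT must wrap the whole sum of squared terms. SQLite builds don't always ship sqrt/power, so they are registered from Python here, and with one row per stand a plain ORDER BY ... LIMIT 1 is enough (sample coordinates invented):

```python
import math
import sqlite3

conn = sqlite3.connect(":memory:")
# SQLite's math functions are a compile-time option, so register our own.
conn.create_function("SQRT", 1, math.sqrt)
conn.create_function("POWER", 2, lambda a, b: a ** b)
conn.execute("CREATE TABLE cabstands (ID INTEGER, x REAL, y REAL)")
conn.executemany("INSERT INTO cabstands VALUES (?, ?, ?)",
                 [(1, 100, 150), (2, 0, 0)])

# Nearest stand to (100, 150): order every row by distance, keep one.
row = conn.execute("""
    SELECT ID, SQRT(POWER(100 - x, 2) + POWER(150 - y, 2)) AS distance
    FROM cabstands ORDER BY distance LIMIT 1
""").fetchone()
```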
    
    qid & accept id: (10694376, 10694502) query: SQL how to handle a many to many relationship soup:


    Try this:

    CREATE TABLE teamPlayer
    (
    playerID INT NOT NULL, 
    teamID INT NOT NULL,
    PRIMARY KEY(playerID, teamID)
    );
    
    alter table teamPlayer
    add constraint 
        fk_teamPlayer__Player foreign key(playerID) references Player(personID);
    
    alter table teamPlayer
    add constraint 
        fk_teamPlayer__Team foreign key(teamID) references Team(teamID);
    

    Or this:

    CREATE TABLE teamPlayer
    (
    playerID INT NOT NULL, 
    teamID INT NOT NULL,
    PRIMARY KEY(playerID, teamID),
    
    constraint fk_teamPlayer__Player
    foreign key(playerID) references Player(personID),
    
    constraint fk_teamPlayer__Team 
    foreign key(teamID) references Team(teamID)
    
    );
    

    If you don't need to name your foreign keys explicitly, you can use this:

    CREATE TABLE teamPlayer
    (
    playerID INT NOT NULL references Player(personID), 
    teamID INT NOT NULL references Team(teamID),
    PRIMARY KEY(playerID, teamID)
    );
    

    All major RDBMSs comply pretty closely with ANSI SQL for this kind of relationship DDL; the syntax is essentially identical across them.

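
    A runnable sketch of the inline-references variant using Python's sqlite3 (note that SQLite enforces foreign keys only after PRAGMA foreign_keys = ON; parent tables are minimal stand-ins for Player and Team):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite checks FKs only when enabled
conn.executescript("""
CREATE TABLE Player (personID INTEGER PRIMARY KEY);
CREATE TABLE Team   (teamID   INTEGER PRIMARY KEY);
CREATE TABLE teamPlayer (
    playerID INTEGER NOT NULL REFERENCES Player(personID),
    teamID   INTEGER NOT NULL REFERENCES Team(teamID),
    PRIMARY KEY (playerID, teamID)
);
INSERT INTO Player VALUES (1);
INSERT INTO Team VALUES (10);
INSERT INTO teamPlayer VALUES (1, 10);
""")

# Inserting a row that references a missing player must fail.
try:
    conn.execute("INSERT INTO teamPlayer VALUES (99, 10)")
    fk_enforced = False
except sqlite3.IntegrityError:
    fk_enforced = True
```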

    qid & accept id: (10777996, 10778075) query: SQL query when a table has a link to itself soup:


    You can use Common Table Expressions (CTEs) to solve this problem. CTEs can be used for recursion, as Andrei pointed out (see the excellent reference that Andrei included in his post). Let's say you have a table as follows:

    create table Person
    (
       PersonId int primary key,
       Name varchar(25),
       ManagerId int foreign Key references Person(PersonId)
    )
    

    and let's insert the following data into the table:

    insert into Person (PersonId, Name, ManagerId) values 
        (1,'Bob', null),
        (2, 'Steve',1),
        (3, 'Tim', 2),
        (4, 'John', 3),
        (5, 'James', null),
        (6, 'Joe', 5)
    

    then we want a query that will return everyone who directly or indirectly reports to Bob, which would be Steve, Tim and John. We don't want to return James and Bob, since they report to no one, or Joe, since he reports to James. This can be done with a CTE query as follows:

    WITH Managers AS 
    ( 
         --initialize
         SELECT PersonId, Name, ManagerId  
            FROM Person WHERE ManagerId =1
         UNION ALL 
         --recursion 
         SELECT p.PersonId, p.Name, p.ManagerId 
            FROM Person p INNER JOIN Managers m  
            ON p.ManagerId = m.PersonId 
    ) 
    SELECT * FROM Managers
    

    This query returns the correct results:

    PersonId    Name                      ManagerId
    ----------- ------------------------- -----------
    2           Steve                     1
    3           Tim                       2
    4           John                      3
    

    Edit: This answer is valid assuming the OP is using SQL Server 2005 or higher. I do not know if this syntax is valid in MySQL or Oracle.
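
    For what it's worth, the same shape does work in SQLite (3.8.3+) and MySQL 8+ once the RECURSIVE keyword is spelled out. A runnable sketch with the answer's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Person (PersonId INTEGER PRIMARY KEY, Name TEXT, ManagerId INTEGER);
INSERT INTO Person VALUES (1, 'Bob', NULL), (2, 'Steve', 1), (3, 'Tim', 2),
                          (4, 'John', 3), (5, 'James', NULL), (6, 'Joe', 5);
""")

# Anchor: direct reports of Bob (ManagerId = 1); recursion walks downward.
rows = conn.execute("""
    WITH RECURSIVE Managers AS (
        SELECT PersonId, Name, ManagerId FROM Person WHERE ManagerId = 1
        UNION ALL
        SELECT p.PersonId, p.Name, p.ManagerId
        FROM Person p JOIN Managers m ON p.ManagerId = m.PersonId
    )
    SELECT Name FROM Managers ORDER BY PersonId
""").fetchall()
```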

    qid & accept id: (10787043, 10788017) query: Returning a row if and only if a sibling row doesn't exist soup:
    SET search_path= 'tmp';
    
    DROP TABLE dogcat CASCADE;
    CREATE TABLE dogcat
            ( id serial NOT NULL
            , zname    varchar
            , foo    INTEGER
            , bar    INTEGER
            , house_id INTEGER NOT NULL
            , PRIMARY KEY (zname,house_id)
            );
    INSERT INTO dogcat(zname,foo,bar,house_id) VALUES
      ('Cat',12,4,1)
     ,('Cat',9,4,2)
     ,('Dog',8,23,1)
     ,('Bird',9,54,1)
     ,('Bird',78,2,2)
     ,('Bird',29,32,3)
            ;
    -- Cartesian product of the {zname, house_id} domains
    WITH cart AS (
            WITH beast AS (
                    SELECT distinct zname AS zname
                    FROM dogcat
                    )
            , house AS (
                    SELECT distinct house_id AS house_id
                    FROM dogcat
                    )
            SELECT beast.zname AS zname
            ,house.house_id AS house_id
            FROM beast , house
            )
    INSERT INTO dogcat(zname,house_id, foo,bar)
    SELECT ca.zname, ca.house_id
            ,fb.foo, fb.bar
    FROM cart ca
         -- find the animal with the lowest id
    JOIN dogcat fb ON fb.zname = ca.zname AND NOT EXISTS
            ( SELECT * FROM dogcat nx
            WHERE nx.zname = fb.zname
            AND nx.id < fb.id
            )
    WHERE NOT EXISTS (
            SELECT * FROM dogcat dc
            WHERE dc.zname = ca.zname
            AND dc.house_id = ca.house_id
            )
            ;
    
    SELECT * FROM dogcat;
    

    Result:

    SET
    DROP TABLE
    NOTICE:  CREATE TABLE will create implicit sequence "dogcat_id_seq" for serial column "dogcat.id"
    NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "dogcat_pkey" for table "dogcat"
    CREATE TABLE
    INSERT 0 6
    INSERT 0 3
     id | zname | foo | bar | house_id 
    ----+-------+-----+-----+----------
      1 | Cat   |  12 |   4 |        1
      2 | Cat   |   9 |   4 |        2
      3 | Dog   |   8 |  23 |        1
      4 | Bird  |   9 |  54 |        1
      5 | Bird  |  78 |   2 |        2
      6 | Bird  |  29 |  32 |        3
      7 | Cat   |  12 |   4 |        3
      8 | Dog   |   8 |  23 |        2
      9 | Dog   |   8 |  23 |        3
    (9 rows)
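
    The Cartesian-product CTE above can also be written with an explicit CROSS JOIN, which some readers find clearer. A sketch against the same dogcat table:

```sql
-- Same {zname, house_id} product as the nested CTEs, using CROSS JOIN
WITH beast AS (SELECT DISTINCT zname FROM dogcat)
   , house AS (SELECT DISTINCT house_id FROM dogcat)
SELECT beast.zname, house.house_id
FROM beast CROSS JOIN house;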
    
    qid & accept id: (10797333, 10797805) query: How to Specify Array variable in plsql soup:

    soup wrap:

    There are a couple of different approaches you could take to get data into your array. The first would be a simple loop, as in the following:

    DECLARE
      TYPE NUMBER_ARRAY IS VARRAY(100) OF NUMBER;
    
      arrNums  NUMBER_ARRAY;
      i NUMBER := 1;
    BEGIN
      arrNums := NUMBER_ARRAY();
    
      FOR aRow IN (SELECT NUMBER_FIELD
                     FROM A_TABLE
                     WHERE ROWNUM <= 100)
      LOOP
        arrNums.EXTEND;
        arrNums(i) := aRow.NUMBER_FIELD;
        i := i + 1;
      END LOOP;
    end;
    

    Another, as suggested by @Rene, would be to use BULK COLLECT, as follows:

    DECLARE
      TYPE NUMBER_ARRAY IS VARRAY(100) OF NUMBER;
    
      arrNums  NUMBER_ARRAY;
    BEGIN
      arrNums := NUMBER_ARRAY();
      arrNums.EXTEND(100);
    
      SELECT NUMBER_FIELD
        BULK COLLECT INTO arrNums
        FROM A_TABLE
        WHERE ROWNUM <= 100;
    end;
    

    Share and enjoy.
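
    Either way, you can verify what ended up in the array with a simple loop. A minimal, self-contained sketch (the literal values are hypothetical; DBMS_OUTPUT must be enabled, e.g. SET SERVEROUTPUT ON in SQL*Plus):

```sql
DECLARE
  TYPE NUMBER_ARRAY IS VARRAY(100) OF NUMBER;
  arrNums NUMBER_ARRAY := NUMBER_ARRAY(10, 20, 30);  -- hypothetical values
BEGIN
  -- COUNT gives the current number of elements, not the VARRAY's limit
  FOR i IN 1 .. arrNums.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE('arrNums(' || i || ') = ' || arrNums(i));
  END LOOP;
END;
/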

    qid & accept id: (10813098, 10813179) query: JOIN multiple fields to one field soup:

    soup wrap:

    You do it the same way you normally would:

    SELECT ABC.*, XYZ.* FROM XYZ, ABC
    WHERE 
    XYZ.KOD_TYPE=ABC.REMARK1
    AND XYZ.KOD_TYPE=ABC.REMARK2
    AND XYZ.KOD_TYPE=ABC.REMARK3
    AND XYZ.KOD_TYPE=ABC.REMARK4
    AND XYZ.KOD_TYPE=ABC.REMARK5
    

    If you need a query where any one remark matches:

    SELECT ABC.*, XYZ.* FROM XYZ, ABC
    WHERE 
    XYZ.KOD_TYPE=ABC.REMARK1
    OR XYZ.KOD_TYPE=ABC.REMARK2
    OR XYZ.KOD_TYPE=ABC.REMARK3
    OR XYZ.KOD_TYPE=ABC.REMARK4
    OR XYZ.KOD_TYPE=ABC.REMARK5
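
    The OR chain can be written more compactly with IN, which matches when any one of the remark columns equals KOD_TYPE. A sketch against the same hypothetical tables, using explicit JOIN syntax:

```sql
SELECT ABC.*, XYZ.*
FROM XYZ
JOIN ABC
  ON XYZ.KOD_TYPE IN (ABC.REMARK1, ABC.REMARK2, ABC.REMARK3, ABC.REMARK4, ABC.REMARK5);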
    
    qid & accept id: (10860452, 27342422) query: How to discover the columns for a given index or key in MonetDB soup:

    soup wrap:

    Two and a half years later, because I was intrigued by the question: You can indeed find the columns for a given key using the poorly named "objects" table.

    For example, consider the following table

    CREATE TABLE indextest (a INT, b INT);
    ALTER TABLE indextest ADD CONSTRAINT indextest_pk PRIMARY KEY (a);
    ALTER TABLE indextest ADD CONSTRAINT indextest_uq UNIQUE (a, b);                                                                           
    

    Now let's find out which columns belong to indextest_uq:

    SELECT idxs.id AS index_id, columns.id AS column_id, tables.name AS table_name, columns.name AS column_name, columns.type AS column_type 
    FROM idxs JOIN objects ON idxs.id=objects.id JOIN tables ON idxs.table_id=tables.id JOIN columns ON idxs.table_id=columns.table_id AND objects.name=columns.name 
    WHERE idxs.name='indextest_uq';
    

    The result of this query looks like this:

    +----------+-----------+------------+-------------+-------------+
    | index_id | column_id | table_name | column_name | column_type |
    +==========+===========+============+=============+=============+
    |     6446 |      6438 | indextest  | a           | int         |
    |     6446 |      6439 | indextest  | b           | int         |
    +----------+-----------+------------+-------------+-------------+
    

    Obviously, more information from the columns and tables tables could be included by extending the SELECT part of the query.
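
    To list the columns for every key at once, you can drop the name filter and order by index instead. A sketch over the same MonetDB system tables:

```sql
SELECT idxs.name AS index_name, tables.name AS table_name, columns.name AS column_name
FROM idxs
JOIN objects ON idxs.id = objects.id
JOIN tables  ON idxs.table_id = tables.id
JOIN columns ON idxs.table_id = columns.table_id AND objects.name = columns.name
ORDER BY idxs.name;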

    qid & accept id: (10918093, 10920001) query: Changing part of a string on some values in postgres database soup:

    soup wrap:

    You need to use this Postgres function

    overlay(string placing string from int [for int]) 
    ex: overlay('Txxxxas' placing 'hom' from 2 for 4)
    

    Your situation involves the select statement having the following:

    overlay(location placing '/home/BBB' from 1 for 9)
    

    You can get more information from the PostgreSQL documentation on string functions.
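
    Putting it into the UPDATE you presumably want (a sketch; the table name and the LIKE filter are assumptions based on the question):

```sql
-- Replace the first 9 characters of location with '/home/BBB'
UPDATE your_table  -- hypothetical table name
SET location = overlay(location placing '/home/BBB' from 1 for 9)
WHERE location LIKE '/home/AAA%';  -- hypothetical filter, so unaffected rows are left alone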

    qid & accept id: (10919401, 10919437) query: Select all data in sql with where condition? soup:

    soup wrap:

    The usual trick is to set a separate parameter for selecting everything:

    SELECT book FROM com WHERE genre=? OR 1=?
    

    When you set the second parameter to 0, filtering by genre is used, but when you set it to 1, everything is returned.

    If you are willing to switch to using named JDBC parameters, you could rewrite with one parameter, and use null to mean "select everything":

    SELECT book FROM com WHERE genre=:genre_param OR :genre_param is null
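
    To see the trick concretely, here are the two cases with literal values substituted for the placeholders (a sketch; 'horror' is a hypothetical genre):

```sql
-- Second parameter bound to 0: the genre filter applies
SELECT book FROM com WHERE genre = 'horror' OR 1 = 0;

-- Second parameter bound to 1: 1 = 1 is always true, so everything is returned
SELECT book FROM com WHERE genre = 'horror' OR 1 = 1;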
    
    qid & accept id: (10979035, 10979094) query: Single default value in a table soup:

    soup wrap:

    The easiest way I see is a check constraint with a UDF (User Defined function).

    Look here, for example: http://sqljourney.wordpress.com/2010/06/25/check-constraint-with-user-defined-function-in-sql-server/

    Untested example

    CREATE FUNCTION dbo.CheckDefaultUnicity(@UserId int)
    RETURNS int
    AS 
    BEGIN
       DECLARE @retval int
       -- YourTable and IsDefault are placeholders for your table and default-flag column
       SELECT @retval = COUNT(*) FROM YourTable WHERE UserId = @UserId AND IsDefault = 1 -- or whatever is your default value
       RETURN @retval 
    END;
    GO
    

    and alter your table

    ALTER TABLE YourTable -- placeholder table name
    ADD CONSTRAINT Ck_UniqueDefaultForUser 
    CHECK (dbo.CheckDefaultUnicity(UserId) < 2)
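
    On SQL Server 2008 and later, a filtered unique index is a simpler way to enforce one default row per user, without a UDF. A sketch using the same hypothetical names:

```sql
-- At most one row per UserId may have IsDefault = 1
CREATE UNIQUE INDEX UX_OneDefaultPerUser
ON YourTable (UserId)
WHERE IsDefault = 1;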
    
    qid & accept id: (10993189, 10993263) query: Oracle Regex expression to match exactly non digit then digits again soup:

    soup wrap:

    Just remove the .* at the end of your expression; it is responsible for matching the additional stuff.

    SELECT 1 FROM DUAL WHERE 
      REGEXP_LIKE('555-5555x123', '^[0-9]{3,4}[^[:digit:]][0-9]{4}$')
    

    That way it matches 3 or 4 digits, a non-digit and 4 more digits.

    The {3,4} and {4} are the quantifiers that define the amount of digits you want to allow. Just change them to the values you need. E.g. {4,} would match 4 or more.

    ^ anchors the regex to the start of the string and $ to the end.

    Update

    To ensure that there is a non-digit after the 4 digits at the end, you can use an alternation:

    SELECT 1 FROM DUAL WHERE 
      REGEXP_LIKE('555-5555x123', '^[0-9]{3,4}[^[:digit:]][0-9]{4}($|[^0-9].*$)')
    

    Now, after your 4 digits there must be either the end of the row OR a non digit ([^0-9] is a negated character class), then anything (but newlines) till the end of the row.

    I don't know if it is important in your case, but [^0-9] would also match a newline character, if you want to avoid this use [^0-9\r\n]
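
    A couple of quick checks of the anchored pattern (each query returns a row only when the string matches):

```sql
-- Matches: 3 digits, a non-digit separator, 4 digits, nothing after
SELECT 1 FROM DUAL WHERE
  REGEXP_LIKE('555-5555', '^[0-9]{3,4}[^[:digit:]][0-9]{4}$');

-- No match: the trailing extension digits are rejected by the $ anchor
SELECT 1 FROM DUAL WHERE
  REGEXP_LIKE('555-5555x123', '^[0-9]{3,4}[^[:digit:]][0-9]{4}$');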

    qid & accept id: (10993546, 10993655) query: Changing the column Type in SQL soup:

    soup wrap:

    My way of doing this:

    (1) Add a new column:

    ALTER TABLE yourtable 
    ADD COLUMN `new_date` DATE NULL AFTER `views`; 
    

    (2) Update the new column

    UPDATE yourtable SET new_date = old_date;
    

    Take care of the data's formatting in old_date. If it isn't formatted yyyy-mm-dd, you might have to use STR_TO_DATE or some string replacements in this UPDATE statement to fit your purposes.

    Example:

    If your data looks like this: mmmm dd, yyyy, hh:mm (e.g. May 17, 2012, 8:36 pm), you can update like this:

    UPDATE yourtable
    SET new_date = STR_TO_DATE(old_date, "%M %e, %Y");
    

    STR_TO_DATE basically reverse engineers string data to a date value.

    (3) Delete the old column

    ALTER TABLE yourtable 
    DROP COLUMN `old_date`; 
    

    (4) Rename the new column

    ALTER TABLE yourtable 
    CHANGE `new_date` `old_date` DATE NULL; 
    

    Done!
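
    If the old column is already in a format MySQL can cast to DATE, the type can also be changed in place, skipping steps 1-4. This relies on MySQL's implicit conversion, so verify it on a copy of the table first:

```sql
-- Converts existing values using MySQL's implicit casting rules
ALTER TABLE yourtable MODIFY `old_date` DATE NULL;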

    qid & accept id: (10999396, 10999467) query: How do I use an INSERT statement's OUTPUT clause to get the identity value? soup:

    soup wrap:

    You can have the newly inserted ID output to the SSMS console like this:

    INSERT INTO MyTable(Name, Address, PhoneNo)
    OUTPUT INSERTED.ID
    VALUES ('Yatrix', '1234 Address Stuff', '1112223333')
    

    You can use this also from e.g. C#, when you need to get the ID back to your calling app - just execute the SQL query with .ExecuteScalar() (instead of .ExecuteNonQuery()) to read the resulting ID back.

    Or if you need to capture the newly inserted ID inside T-SQL (e.g. for later further processing), you need to create a table variable:

    DECLARE @OutputTbl TABLE (ID INT)
    
    INSERT INTO MyTable(Name, Address, PhoneNo)
    OUTPUT INSERTED.ID INTO @OutputTbl(ID)
    VALUES ('Yatrix', '1234 Address Stuff', '1112223333')
    

    This way, you can put multiple values into @OutputTbl and do further processing on those. You could also use a "regular" temporary table (#temp) or even a "real" persistent table as your "output target" here.
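
    Reading the captured IDs back out of the table variable is then an ordinary SELECT, as long as it runs in the same batch and scope as the DECLARE:

```sql
-- Table variables vanish at the end of the batch, so this must follow
-- the INSERT ... OUTPUT above within the same batch
SELECT ID FROM @OutputTbl;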

    qid & accept id: (11019847, 11020207) query: Database design pattern where one attribute only applies if another attribute has certain value(s) soup:

    soup wrap:

    I would call this a data dependency. Not all data dependencies can be modeled directly or conveniently with relational decomposition. This one can be handled pretty easily with a check constraint:

    CREATE TABLE Students (
      id SERIAL PRIMARY KEY, -- for example, something else in reality
      grade INTEGER NOT NULL,
      honors BOOLEAN,
      CONSTRAINT ensure_honors_grade 
        CHECK((honors IS NULL AND grade < 7) OR 
              (honors IS NOT NULL AND grade >= 7))
    );
    

    Another solution might be to use two tables:

    CREATE TABLE Students (
      id SERIAL PRIMARY KEY,
      grade INTEGER NOT NULL,
      CONSTRAINT id_grade_unique UNIQUE (id, grade) -- needed for FK constraint below
    );
    
    CREATE TABLE Honors (
      student_id INTEGER NOT NULL,
      grade INTEGER NOT NULL,
      honors BOOLEAN NOT NULL,
      CONSTRAINT student_fk FOREIGN KEY (student_id, grade) REFERENCES Students(id, grade),
      CONSTRAINT valid_grade CHECK(grade >= 7)
    );
    

    This alternative design is more explicit about the relationship between the grade and whether or not there is an honors flag, and leaves room for further differentiation of students in grades 7-8 (though the table name should be improved). If you only have the one property, the honors boolean, then this is probably overkill. As @BrankoDimitrijevic mentions, this doesn't enforce the existence of a row in Honors just because the grade is 7 or 8, and you're also paying for an index you wouldn't otherwise need. So there are tradeoffs; these are certainly not the only two designs possible; Branko also suggests using triggers.

    When it comes to OO design, @Ryan is correct, but for proper relational database design one does not, in general, approach problems by trying to identify inheritance patterns. That is the OO perspective. It will always be important to concern yourself with your access patterns and how your code will be getting at the data, but in relational database design, one strives for normalization and flexibility in the database first and the code second, because there will invariably be multiple codebases getting at the data and you want to ensure the data is always valid no matter how buggy the accessing code is.

    qid & accept id: (11033340, 11033391) query: How to find sum of multiple columns in a table in SQL Server 2005? soup:

    soup wrap:

    Easy:

    SELECT 
       Val1,
       Val2,
       Val3,
       (Val1 + Val2 + Val3) as 'Total'
    FROM Emp
    

    or if you just want one row:

    SELECT 
       SUM(Val1) as 'Val1',
       SUM(Val2) as 'Val2',
       SUM(Val3) as 'Val3',
       (SUM(Val1) + SUM(Val2) + SUM(Val3)) as 'Total'
    FROM Emp
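
    One caveat: if any of the columns can be NULL, the row-wise addition returns NULL for that row. Wrapping each column with ISNULL (available on SQL Server 2005, per the question) avoids that:

```sql
SELECT 
   (ISNULL(Val1, 0) + ISNULL(Val2, 0) + ISNULL(Val3, 0)) as 'Total'
FROM Emp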
    
    qid & accept id: (11097839, 11098733) query: How to create a not null column in a view soup:

    soup wrap:

    You can't add a not null or check constraint to a view; see this and on the same page 'Restrictions on NOT NULL Constraints' and 'Restrictions on Check Constraints'. You can add a with check option (against a redundant where clause) to the view but that won't be marked as not null in the data dictionary.

    The only way I can think to get this effect is, if you're on 11g, to add the cast value as a virtual column on the table, and (if it's still needed) create the view against that:

    ALTER TABLE "MyTable" ADD "MyBDColumn" AS
        (CAST("MyColumn" AS BINARY_DOUBLE)) NOT NULL;
    
    CREATE OR REPLACE VIEW "MyView" AS
    SELECT
        "MyBDColumn" AS "MyColumn"
    FROM "MyTable";
    
    desc "MyView"
    
     Name                                      Null?    Type
     ----------------------------------------- -------- ----------------------------
     MyColumn                                  NOT NULL BINARY_DOUBLE
    

    Since you said in a comment on dba.se that this is for mocking something up, you could use a normal column and a trigger to simulate the virtual column:

    CREATE TABLE "MyTable" 
    (
      "MyColumn" NUMBER NOT NULL,
      "MyBDColumn" BINARY_DOUBLE NOT NULL
    );
    
    CREATE TRIGGER "MyTrigger" before update or insert on "MyTable"
    FOR EACH ROW
    BEGIN
        :new."MyBDColumn" := :new."MyColumn";
    END;
    /
    
    CREATE VIEW "MyView" AS
    SELECT
        "MyBDColumn" AS "MyColumn"
    FROM "MyTable";
    
    INSERT INTO "MyTable" ("MyColumn") values (2);
    
    SELECT * FROM "MyView";
    
      MyColumn
    ----------
      2.0E+000
    

    And desc "MyView" still gives:

     Name                                      Null?    Type
     ----------------------------------------- -------- ----------------------------
     MyColumn                                  NOT NULL BINARY_DOUBLE
    

    As Leigh mentioned (also on dba.se), if you did want to insert/update the view you could use an instead of trigger, with the VC or fake version.

    qid & accept id: (11104819, 11104987) query: sql query: create a table by merging rows from an exisiting table as follows: soup:

    soup wrap:

    I assume your node1 and node2 are integer foreign keys linking to a node table, and the table you mention is an edge table?

    Assuming the edge table has been created with something like:

    CREATE TABLE edges( node1 INTEGER, node2 INTEGER, weight REAL );
    

    How about something like (assuming no self-arcs and for every link from a->b there is also a link from b->a):

    CREATE TABLE newedges( node1 INTEGER, node2 INTEGER, weight1 REAL, weight2 REAL );
    
    INSERT INTO newedges
        SELECT e1.node1, e1.node2, e1.weight, e2.weight
        FROM edges AS e1 INNER JOIN edges AS e2
        ON e1.node1=e2.node2 AND e1.node2=e2.node1
        WHERE e1.node1 < e1.node2;
    

    The self-join collates forward and backwards edges, and the requirement that e1.node1 is less than e1.node2 ensures that you only see each collated edge once.

    Edit in response to a request to fill in zeros for missing backwards edge:

    For missing backwards edges, you can do a LEFT JOIN and use a CASE statement to fill in the gaps with zeros:

    INSERT INTO newedges
        SELECT
            e1.node1,
            e1.node2,
            e1.weight,
            CASE WHEN e2.weight IS NULL THEN 0.0 ELSE e2.weight END
        FROM edges AS e1 LEFT JOIN edges AS e2
        ON e1.node1=e2.node2 AND e1.node2=e2.node1
        WHERE e1.node1 < e1.node2;
    

    Hope that helps!
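
    As an aside, the CASE expression in the last query is equivalent to COALESCE, which reads a little tighter (same assumed schema):

```sql
INSERT INTO newedges
    SELECT e1.node1, e1.node2, e1.weight, COALESCE(e2.weight, 0.0)
    FROM edges AS e1 LEFT JOIN edges AS e2
    ON e1.node1 = e2.node2 AND e1.node2 = e2.node1
    WHERE e1.node1 < e1.node2;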

    qid & accept id: (11114638, 11114673) query: How to cut a part of a string in MySQL? soup:

    soup wrap:

    You can use

    select substring_index(substring(mycol, instr(mycol, "=")+1), " ", 1)
    

    to get the first token after the =.

    This returns 76767.


    This works in two steps:

    substring(mycol, instr(mycol, "=")+1)
    

    returns the string starting after the =

    and

    substring_index( xxx , " ", 1)
    

    gets the first element of the virtual array you'd get from a split by " ", and so returns the first token of xxx.
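
    For a concrete check with a literal value (a sketch; the sample string 'id=76767 rest' is hypothetical):

```sql
-- instr finds the '=', substring keeps everything after it,
-- substring_index keeps the first space-delimited token
SELECT substring_index(substring('id=76767 rest', instr('id=76767 rest', '=') + 1), ' ', 1);
-- returns '76767'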

    qid & accept id: (11116129, 11116361) query: Alter All Column Values using TRIM in SQL soup:

    soup wrap:

    If the datatype of the name column is varchar, you don't need the RTRIM function: SQL Server effectively ignores trailing spaces in string comparisons and LEN, so use LTRIM only.

    update tablename
    set    name = ltrim(name);
    -- add a WHERE clause here if you only want to trim some rows
    

    Run this to see how the trailing spaces are handled automatically:

    DECLARE @mytb table
    (
    name varchar(20)
    );
    
    INSERT INTO @mytb VALUES ('   stackoverflow         ');
    
    SELECT len(name) from @mytb;
    
    SELECT ltrim(name),len(ltrim(name)) from @mytb;
    
    qid & accept id: (11117622, 11120820) query: Select all subsets in a many-to-many relation soup:
    soup wrap:
    DROP SCHEMA tmp CASCADE;
    CREATE SCHEMA tmp;
    
    SET search_path='tmp';
    
    
    CREATE TABLE instrument
            ( id INTEGER NOT NULL PRIMARY KEY
            , zname varchar
            );
    INSERT INTO instrument(id, zname) VALUES
    (1, 'instrument_1'), (2, 'instrument_2')
    , (3, 'instrument_3'), (4, 'instrument_4');
    
    CREATE TABLE piece
            ( id INTEGER NOT NULL PRIMARY KEY
            , zname varchar
            );
    INSERT INTO piece(id, zname) VALUES
    (1, 'piece_1'), (2, 'piece_2'), (3, 'piece_3'), (4, 'piece_4');
    
    CREATE TABLE has_part
            ( piece_id INTEGER NOT NULL
            , instrument_id INTEGER NOT NULL
            , PRIMARY KEY (piece_id,instrument_id)
            );
    
    INSERT INTO has_part(piece_id,instrument_id) VALUES
    (1,1), (1,2), (1,3)
    , (2,1), (2,2), (2,3), (2,4)
    , (3,1), (3,3), (3,4)
    , (4,2)
            ;
    

    The pure SQL (note the double negation: NOT EXISTS (... NOT IN ())):

    SELECT zname
    FROM piece pp
    WHERE NOT EXISTS (
            SELECT * FROM has_part nx
            WHERE nx.piece_id = pp.id
            AND nx.instrument_id NOT IN (1,2,3)
            )
            ;
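
    The same relational division can be written without the double negation by counting parts instead; a sketch against the tables above (note that, unlike the NOT EXISTS form, this join drops pieces that have no parts at all):

```sql
-- A piece qualifies when all of its parts are in the allowed set,
-- i.e. its number of allowed parts equals its total number of parts.
SELECT p.zname
FROM piece p
JOIN has_part hp ON hp.piece_id = p.id
GROUP BY p.id, p.zname
HAVING COUNT(*) = COUNT(CASE WHEN hp.instrument_id IN (1, 2, 3) THEN 1 END);
```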
    
    qid & accept id: (11119197, 11119946) query: sql server table peak time soup:

    soup wrap:

    I've had a play around. I'm working with sessions that have recorded start and end datetime2 values, but hopefully you can adapt your current data to conform to this:

    Sample data (if I've got the answer wrong, maybe you can adapt this, add it to your question, and add more samples and expected outputs):

    create table #Sessions (
        --We'll treat this as a semi-open interval - the session was "live" at SessionStart, and "dead" at SessionEnd
        SessionStart datetime2 not null,
        SessionEnd datetime2 null
    )
    insert into #Sessions (SessionStart,SessionEnd) values
    ('20120101','20120105'),
    ('20120103','20120109'),
    ('20120107','20120108')
    

    And the query:

    --Logically, the highest number of simultaneous users was reached at some point when a session started
    ;with StartTimes as (
        select distinct SessionStart as Instant from #Sessions
    ), Overlaps as (
        select
            st.Instant,COUNT(*) as Cnt,MIN(s.SessionEnd) as SessionEnd
        from
            StartTimes st
                inner join
            #Sessions s
                on
                    st.Instant >= s.SessionStart and
                    st.Instant < s.SessionEnd
        group by
            st.Instant
    ), RankedOverlaps as (
        select Instant as SessionStart,Cnt,SessionEnd,RANK() OVER (ORDER BY Cnt desc) as rnk
        from Overlaps
    )
    select * from RankedOverlaps where rnk = 1
    
    drop table #Sessions
    

    Which, with my sample data gives:

    SessionStart           Cnt         SessionEnd             rnk
    ---------------------- ----------- ---------------------- --------------------
    2012-01-03 00:00:00.00 2           2012-01-05 00:00:00.00 1
    2012-01-07 00:00:00.00 2           2012-01-08 00:00:00.00 1
    

    An alternative approach, still using the above, if you also want to analyze "not quite peak" values, is as follows:

    --An alternate approach - arrange all of the distinct time values from Sessions into order
    ;with Instants as (
        select SessionStart as Instant from #Sessions
        union --We want distinct here
        select SessionEnd from #Sessions
    ), OrderedInstants as (
        select Instant,ROW_NUMBER() OVER (ORDER BY Instant) as rn
        from Instants
    ), Intervals as (
        select oi1.Instant as StartTime,oi2.Instant as EndTime
        from
            OrderedInstants oi1
                inner join
            OrderedInstants oi2
                on
                    oi1.rn = oi2.rn - 1
    ), IntervalOverlaps as (
        select
            StartTime,
            EndTime,
            COUNT(*) as Cnt
        from
            Intervals i
                inner join
            #Sessions s
                on
                    i.StartTime < s.SessionEnd and
                    s.SessionStart < i.EndTime
        group by
            StartTime,
            EndTime
    )
    select * from IntervalOverlaps order by Cnt desc,StartTime
    

    This time, I'm outputting all of the time periods, together with the number of simultaneous users at the time (ordered from highest to lowest):

    StartTime              EndTime                Cnt
    ---------------------- ---------------------- -----------
    2012-01-03 00:00:00.00 2012-01-05 00:00:00.00 2
    2012-01-07 00:00:00.00 2012-01-08 00:00:00.00 2
    2012-01-01 00:00:00.00 2012-01-03 00:00:00.00 1
    2012-01-05 00:00:00.00 2012-01-07 00:00:00.00 1
    2012-01-08 00:00:00.00 2012-01-09 00:00:00.00 1
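
    A third sketch of the peak calculation, if you're on SQL Server 2012 or later, is an event sweep: +1 at every session start, -1 at every end, with a running total (same #Sessions table as above):

```sql
;with Events as (
    select SessionStart as Instant, +1 as Delta from #Sessions
    union all
    select SessionEnd, -1 from #Sessions where SessionEnd is not null
)
select Instant,
       -- ordering by Delta puts the -1 first on ties, matching the
       -- semi-open interval convention (session is dead at SessionEnd)
       sum(Delta) over (order by Instant, Delta
                        rows unbounded preceding) as ConcurrentSessions
from Events
order by Instant, Delta;
```

    For the sample data the running total reaches its maximum of 2 at 2012-01-03 and 2012-01-07, agreeing with the ranked output above.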
    
    qid & accept id: (11135522, 11135672) query: The best way to select the latest rates for several currency codes from the DB soup:

    soup wrap:

    Assuming that the latest exchange rate is the one with the highest id you can use:

    SELECT *
    FROM rates r
    WHERE r.id IN (
        SELECT MAX(r1.id)
        FROM rates r1
        GROUP BY r1.currency_code
    ) T;
    

    But I strongly suggest another pattern I love. I explained it in another answer this morning:

    SELECT
      c.*,
      r1.*
    FROM currency c
    INNER JOIN rates r1 ON c.code = r1.currency_code
    LEFT JOIN rates r2 ON r1.currency_code = r2.currency_code AND r2.id > r1.id
    WHERE r2.id IS NULL;
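
    If your database supports window functions, the same "latest row per currency" can be sketched with ROW_NUMBER instead of the self-join (same tables, same highest-id-wins assumption):

```sql
SELECT *
FROM (
    SELECT r.*,
           -- rn = 1 marks the row with the highest id per currency
           ROW_NUMBER() OVER (PARTITION BY r.currency_code
                              ORDER BY r.id DESC) AS rn
    FROM rates r
) latest
WHERE rn = 1;
```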
    
    qid & accept id: (11168749, 11168830) query: How To Find First Date of All MOnths In A Year soup:

    soup wrap:

    Please try the following; you may want to tweak the date format/timezone.

    select to_date('2012/'||l||'/01', 'yyyy/mm/dd') 
    from (select level l from dual connect by level < 13)
    

    EDIT: As the OP noted in the comments, the current year needs to be taken rather than hardcoded. The updated query is:

    SELECT L || '/01/' || TO_CHAR (SYSDATE, 'YYYY') DATESS FROM 
    (SELECT LEVEL L FROM DUAL CONNECT BY LEVEL < 13)
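
    A variant that avoids assembling date strings altogether, using TRUNC and ADD_MONTHS (Oracle; TRUNC(SYSDATE, 'YYYY') is the first day of the current year):

```sql
SELECT ADD_MONTHS(TRUNC(SYSDATE, 'YYYY'), LEVEL - 1) AS month_start
FROM dual
CONNECT BY LEVEL <= 12;
```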
    
    qid & accept id: (11215684, 11215700) query: Find all but allowed characters in column soup:
    soup wrap:

    it's supposed to pull all rows that do not contain characters we do not want.

    To find rows that contain x you can use LIKE:

    SELECT * FROM yourtable WHERE col LIKE '%x%'
    

    To find rows that do not contain x you can use NOT LIKE:

    SELECT * FROM yourtable WHERE col NOT LIKE '%x%'
    

    So your query should use NOT LIKE because you want rows that don't contain something:

    SELECT NID FROM NOTES WHERE NOTE NOT LIKE '%[0-9a-zA-Z#.;:/^\(\)\@\ \  \\\-]%'
    

    That should return any rows that do not contain

    0-9 a-z A-z . : ; ^ & @ \ / ( ) #
    

    No. Because of the ^ at the start, it returns the rows that don't contain any characters other than those. The characters you listed are the ones that are allowed.

    qid & accept id: (11227924, 13309814) query: PIVOT on hierarchical data soup:

    soup wrap:

    You can use PIVOT, UNPIVOT and a recursive query to perform this.

    Static version, where you hard-code the values to be transformed:

    ;with hd (id, name, parentid, category)
    as
    (
      select id, name, parentid, 1 as category
      from yourtable
      where parentid is null
      union all
      select t1.id, t1.name, t1.parentid, hd.category +1
      from yourtable t1
      inner join hd
        on t1.parentid = hd.id
    ),
    unpiv as
    (
      select value, 'cat_'+cast(category as varchar(5))+'_'+ col col_name
      from
      (
        select cast(id as varchar(17)) id, name, parentid, category
        from hd
      ) src
      unpivot
      (
        value for col in (id, name)
      ) un
    )
    select [cat_1_id], [cat_1_name],
                       [cat_2_id], [cat_2_name],
                       [cat_3_id], [cat_3_name]
    from unpiv
    pivot
    (
      max(value)
      for col_name in ([cat_1_id], [cat_1_name],
                       [cat_2_id], [cat_2_name],
                       [cat_3_id], [cat_3_name])
    ) piv
    

    See SQL Fiddle with Demo

    Dynamic version, where the values are generated at run-time:

    ;with hd (id, name, parentid, category)
    as
    (
      select id, name, parentid, 1 as category
      from yourtable
      where parentid is null
      union all
      select t1.id, t1.name, t1.parentid, hd.category +1
      from yourtable t1
      inner join hd
        on t1.parentid = hd.id
    )
    select category categoryNumber
    into #temp
    from hd
    
    DECLARE @cols AS NVARCHAR(MAX),
        @query  AS NVARCHAR(MAX)
    
    select @cols = STUFF((SELECT distinct ',' + quotename('cat_'+cast(CATEGORYNUMBER as varchar(10))+'_'+col) 
                      from #temp
                      cross apply (select 'id' col
                                   union all 
                                   select 'name' col) src
                FOR XML PATH(''), TYPE
                ).value('.', 'NVARCHAR(MAX)') 
            ,1,1,'')
    
    set @query = ';with hd (id, name, parentid, category)
                  as
                  (
                    select id, name, parentid, 1 as category
                    from yourtable
                    where parentid is null
                    union all
                    select t1.id, t1.name, t1.parentid, hd.category +1
                    from yourtable t1
                    inner join hd
                      on t1.parentid = hd.id
                  ),
                  unpiv as
                  (
                    select value, ''cat_''+cast(category as varchar(5))+''_''+ col col_name
                    from
                    (
                      select cast(id as varchar(17)) id, name, parentid, category                 
                      from hd
                    ) src
                    unpivot
                    (
                      value for col in (id, name)
                    ) un
                  )
                  select '+@cols+'
                  from unpiv
                  pivot
                  (
                    max(value)
                    for col_name in ('+@cols+')
                   ) piv'
    
    execute(@query)
    
    drop table #temp
    

    See SQL Fiddle with Demo

    The Results are the same for both:

    | CAT_1_ID | CAT_1_NAME | CAT_2_ID |        CAT_2_NAME | CAT_3_ID | CAT_3_NAME |
    --------------------------------------------------------------------------------
    |        1 | Decorating |        2 | Paint and Brushes |        5 |    Rollers |
    
    qid & accept id: (11260900, 11260933) query: Making a query that only shows unique records soup:

    soup wrap:

    If you need only emailAddress it is quite simple:

    select distinct emailAddress from 
    

    Edited according to request in comments.

    If you want to select both the distinct emailAddress and ANY customerName related to it, then you must somehow tell SQL how to choose the customerName. The easiest way is to select e.g. MIN(customerName); all others (usually those later in the alphabet, though it actually depends on collation) are then discarded. The query would be:

    select emailAddress, min(customerName) as pickedCustomerName
    from 
    group by emailAddress
    
    qid & accept id: (11282433, 11282492) query: Minus Query in MsAccess soup:

    soup wrap:

    One possibility is NOT IN. There is no such thing as a minus query in MS Access.

    select h.* from hello h
    WHERE uniqueid NOT IN
    (select uniqueid from hello1 h1)
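
    The same minus can also be sketched with NOT EXISTS, which Access supports and which behaves sanely if uniqueid can ever be Null (NOT IN returns no rows when the subquery yields a Null):

```sql
SELECT h.*
FROM hello AS h
WHERE NOT EXISTS (
    SELECT 1 FROM hello1 AS h1
    WHERE h1.uniqueid = h.uniqueid
);
```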
    

    For a purely SQL solution, you need, say:

    SELECT t.* FROM Table t
    LEFT JOIN NewTable n
    ON t.ID = n.ID
    WHERE t.Field1 & "" <> n.Field1 & ""
       OR t.Field2 & "" <> n.Field2 & ""
    

    However, it is easier using VBA.

    qid & accept id: (11292524, 11293632) query: How can get null column after UNPIVOT? soup:

    soup wrap:

    Have you tried using COALESCE or ISNULL?

    e.g.

    ISNULL(AVG(column_1), 0) as column_1,   
    

    This does mean that you will get 0 as the result instead of 'NULL' though - do you need null when they are all NULL?

    Edit:

    Also, is there any need for an unpivot? Since you are specifying all 3 columns, why not just do:

    SELECT BankID, (column_1 + column_2 + column_3) / 3 FROM partstat
    WHERE bankid = 4
    

    This gives you the same results but with the NULL

    Of course this is assuming you have 1 row per bankid

    Edit:

    UNPIVOT isn't supposed to be used like this as far as I can see - I'd unpivot first then try the AVG... let me have a go...

    Edit:

    Ah I take that back, it is just a problem with NULLs - other posts suggest ISNULL or COALESCE to eliminate the nulls, you could use a placeholder value like -1 which could work e.g.

    SELECT bankid, AVG(CASE WHEN value = -1 THEN NULL ELSE value END) AS Average 
    FROM ( 
        SELECT bankid,  
        isnull(AVG(column_1), -1) as column_1 ,
        AVG(Column_2) as column_2 ,
        Avg(column_3) as column_3 
        FROM data     
        group by bankid
    ) as pvt 
    UNPIVOT (Value FOR o in (column_1, column_2, column_3)) as u
    GROUP BY bankid 
    

    You need to make sure this works for your data, though: if you have a value in column_2 or column_3, then column_1 will no longer be -1. It might be worth adding a CASE to check whether they are all NULL and, in that case, replacing the first NULL with -1.

    qid & accept id: (11307344, 11307526) query: How to check verify that SQL query was ran in transaction? soup:

    soup wrap:

    There's a transaction section in the output of:

    SHOW ENGINE INNODB STATUS\G
    

    Which looks like (that's from my local MySQL currently not running any queries):

    TRANSACTIONS
    ------------
    Trx id counter 900
    Purge done for trx's n:o < 0 undo n:o < 0
    History list length 0
    LIST OF TRANSACTIONS FOR EACH SESSION:
    ---TRANSACTION 0, not started
    MySQL thread id 47, OS thread handle 0x7fc8b85d3700, query id 120 localhost root
    SHOW ENGINE INNODB STATUS
    

    I don't know if you can actively monitor this information so that you can see it at the exact moment of your 3 insert operations. You can probably use that last bullet of yours (using slow queries) here...

    In addition, MySQL has command counters. These counters can be accessed via:

    SHOW GLOBAL STATUS LIKE "COM\_%"

    Each execution of a command increments the counter associated with it. Transaction related counters are Com_begin, Com_commit and Com_rollback, so you can execute your code and monitor those counters.
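
    So one way to verify the suspect code is to snapshot those counters before and after it runs and compare; a sketch (the counters are global, so keep other sessions quiet while testing):

```sql
-- Before running the code under test:
SHOW GLOBAL STATUS LIKE 'Com_begin';
SHOW GLOBAL STATUS LIKE 'Com_commit';
SHOW GLOBAL STATUS LIKE 'Com_rollback';

-- ... run the three inserts here ...

-- Afterwards: if Com_begin and Com_commit each increased by one,
-- the inserts were wrapped in an explicit transaction.
SHOW GLOBAL STATUS LIKE 'Com_begin';
SHOW GLOBAL STATUS LIKE 'Com_commit';
SHOW GLOBAL STATUS LIKE 'Com_rollback';
```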

    qid & accept id: (11308438, 11309255) query: MYSQL auto_increment_increment soup:

    soup wrap:

    Updated version: only a single id field is used. This is very probably not atomic, so use inside a transaction if you need concurrency:

    http://sqlfiddle.com/#!2/a4ed8/1

    CREATE TABLE IF NOT EXISTS person (
       id  INT NOT NULL AUTO_INCREMENT,
       PRIMARY KEY ( id )
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
    
    CREATE TRIGGER insert_kangaroo_id BEFORE INSERT ON person FOR EACH ROW BEGIN
      DECLARE newid INT;
    
      SET newid = (SELECT AUTO_INCREMENT
                   FROM information_schema.TABLES
                   WHERE TABLE_SCHEMA = DATABASE()
                   AND TABLE_NAME = 'person'
                  );
    
      IF NEW.id AND NEW.id >= newid THEN
        SET newid = NEW.id;
      END IF;
    
      SET NEW.id = 5 * CEILING( newid / 5 );
    END;
    

    Old, non-working "solution" (the before-insert trigger can't see the current auto-increment value):

    http://sqlfiddle.com/#!2/f4f9a/1

    CREATE TABLE IF NOT EXISTS person (
       secretid  INT NOT NULL AUTO_INCREMENT,
       id        INT NOT NULL,
       PRIMARY KEY ( secretid )
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
    
    CREATE TRIGGER update_kangaroo_id BEFORE UPDATE ON person FOR EACH ROW BEGIN
      SET NEW.id = NEW.secretid * 5;
    END;
    
    CREATE TRIGGER insert_kangaroo_id BEFORE INSERT ON person FOR EACH ROW BEGIN
      SET NEW.id = NEW.secretid * 5; -- NEW.secretid is empty = unusable!
    END;
    
    qid & accept id: (11329936, 11330295) query: Calculate Percentages In Query - Access SQL soup:

    soup wrap:

    Your subquery has no WHERE clause and thus counts all records, but you can do it without a subquery:

    SELECT
        "Criterion = 1" AS CritDesc,
        SUM(IIf(Criterion = 1, 1, 0)) AS NumCrit,
        COUNT(*) AS TotalNum,
        SUM(IIf(Criterion = 1, 1, 0)) / COUNT(*) AS Percentage,
        ParentNumber AS Parent
    FROM
        tblChild
    GROUP BY
        ParentNumber;
    

    Note: I dropped the WHERE-clause. Instead I am counting the records fulfilling the criterion by summing up 1 for Criterion = 1 and 0 otherwise. This allows me to get the total number per ParentNumber at the same time with Count(*).
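    The conditional-aggregation trick can be checked outside Access. A minimal sketch in SQLite via Python, with CASE WHEN standing in for the Access-only IIf (table name from the answer, data invented):

```python
import sqlite3

# Toy stand-in for the Access query: CASE WHEN replaces IIf, and the
# SUM is multiplied by 1.0 so SQLite's integer division doesn't truncate.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tblChild (ParentNumber INTEGER, Criterion INTEGER)")
con.executemany("INSERT INTO tblChild VALUES (?, ?)",
                [(1, 1), (1, 0), (1, 1), (2, 0)])

rows = con.execute("""
    SELECT ParentNumber,
           SUM(CASE WHEN Criterion = 1 THEN 1 ELSE 0 END) AS NumCrit,
           COUNT(*) AS TotalNum,
           SUM(CASE WHEN Criterion = 1 THEN 1 ELSE 0 END) * 1.0 / COUNT(*) AS Percentage
    FROM tblChild
    GROUP BY ParentNumber
    ORDER BY ParentNumber
""").fetchall()
print(rows)  # parent 1: 2 of 3 children match; parent 2: 0 of 1
```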


    UPDATE

    You might want to get results for parents having no children as well. In that case you can use an outer join

    SELECT
        "Criterion = 1" AS CritDesc,
        SUM(IIf(C.Criterion = 1, 1, 0)) AS NumCrit,
        COUNT(C.Number) AS TotalNumOfChildren,
        SUM(IIf(C.Criterion = 1, 1, 0)) / COUNT(*) AS Percentage,
        P.Number AS Parent
    FROM
        tblParent AS P
        LEFT JOIN tblChild AS C
            ON C.ParentNumber = P.Number
    GROUP BY
        P.Number;
    

    Note that I get the total number of children with Count(C.Number) as Count(*) would count records with no children as well and yield 1 in that case. In the percentage calculation, however, I divide by Count(*) in order to avoid a division by zero. The result will still be correct in that case, since the sum of records with Criterion = 1 will be zero.
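    The COUNT(C.Number) vs COUNT(*) distinction is easy to verify: under an outer join a childless parent still yields one joined row, but its child columns are NULL, and COUNT over a column skips NULLs. A quick SQLite sketch via Python (invented data):

```python
import sqlite3

# A childless parent produces one joined row with NULL child columns:
# COUNT(C.Number) skips it, COUNT(*) does not.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tblParent (Number INTEGER);
    CREATE TABLE tblChild (Number INTEGER, ParentNumber INTEGER);
    INSERT INTO tblParent VALUES (1), (2);
    INSERT INTO tblChild VALUES (10, 1), (11, 1);
""")

rows = con.execute("""
    SELECT P.Number,
           COUNT(C.Number) AS children,    -- 0 for the childless parent
           COUNT(*)        AS joined_rows  -- still 1 for the childless parent
    FROM tblParent P
    LEFT JOIN tblChild C ON C.ParentNumber = P.Number
    GROUP BY P.Number
    ORDER BY P.Number
""").fetchall()
print(rows)
```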

    qid & accept id: (11350686, 11351593) query: Combine results of joins on two tables soup:

soup wrap:

    Linked by (tag_id, mark_id)

    SELECT DISTINCT i.*
    FROM   tags_users  tu  
    JOIN   marks_users mu USING (user_id)
    JOIN   items       i  USING (tag_id, mark_id)
    WHERE  tu.user_id = 5;
    

    The DISTINCT should not be necessary if you have defined multi-column primary or unique keys on these columns.

    Linked by tag_id or mark_id

    @Gordon's answer is perfectly valid. But it will perform terribly.
    This will be much faster:

    SELECT i.*
    FROM   items i  
    WHERE  EXISTS (
        SELECT 1
        FROM   tags_users  tu
        WHERE  tu.tag_id = i.tag_id
        AND    tu.user_id = 5
        )
    OR     EXISTS (
        SELECT 1
        FROM   marks_users mu 
        WHERE  mu.mark_id = i.mark_id
        AND    mu.user_id = 5
        );
    

    Assumes that entries in items itself are UNIQUE on (tag_id, mark_id).
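    As a cross-check, the semi-join behaviour is easy to reproduce in SQLite via Python (toy data; note that the duplicated tags_users row does not duplicate the item in the result):

```python
import sqlite3

# Toy reproduction of the EXISTS semi-join: tags_users contains a duplicate
# qualifying row for item 1, yet item 1 comes back exactly once, no DISTINCT.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE items (item_id INTEGER, tag_id INTEGER, mark_id INTEGER);
    CREATE TABLE tags_users (tag_id INTEGER, user_id INTEGER);
    CREATE TABLE marks_users (mark_id INTEGER, user_id INTEGER);
    INSERT INTO items VALUES (1, 10, 20), (2, 11, 21), (3, 12, 22);
    INSERT INTO tags_users VALUES (10, 5), (10, 5), (99, 5);
    INSERT INTO marks_users VALUES (21, 5);
""")

rows = con.execute("""
    SELECT i.* FROM items i
    WHERE EXISTS (SELECT 1 FROM tags_users tu
                  WHERE tu.tag_id = i.tag_id AND tu.user_id = 5)
       OR EXISTS (SELECT 1 FROM marks_users mu
                  WHERE mu.mark_id = i.mark_id AND mu.user_id = 5)
    ORDER BY i.item_id
""").fetchall()
print(rows)  # item 3 matches neither table and is filtered out
```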

    Why is this much faster?

    If you JOIN to two unrelated tables (like in @Gordon's answer), you effectively form a cross join, which is known for rapidly degrading performance as the number of rows grows: O(N²). Say you have 10,000 items, each matching 100 rows in tags_users and another 100 in marks_users.

    This will happen in @Gordon's query:

    1. JOIN rows of items to tags_users. Each item is joined to 100 rows, resulting in 10,000 x 100 = 1,000,000 rows. (!)
    2. JOIN that to marks_users. Each row is joined to 100 marks, resulting in 100,000,000 rows. (!!)
    3. The WHERE clause is applied and the many duplicates are collapsed by DISTINCT, resulting in 10,000 rows.

    Test with EXPLAIN ANALYZE. The difference will be obvious even with small numbers and staggering with growing numbers.

    SQL Fiddle.

    Benchmarks

    I ran some quick tests with this setup on my machine (pg 9.1):

    Gordon's query

    SELECT DISTINCT i.*
    FROM   items i
    LEFT   JOIN tags_users tu on i.tag_id = tu.tag_id
    LEFT   JOIN marks_users mu on i.mark_id = mu.mark_id
    WHERE  5 IN (tu.user_id, mu.user_id);
    

    Total runtime: 38229.860 ms

    Sanitized version

    Pulling the condition on user_id into the JOIN clause cuts down on the combinations radically, but it is still a (much tinier) cross join.

    SELECT DISTINCT i.*
    FROM   items i
    LEFT   JOIN tags_users tu on i.tag_id = tu.tag_id AND tu.user_id = 5
    LEFT   JOIN marks_users mu on i.mark_id = mu.mark_id AND mu.user_id = 5
    WHERE  tu.user_id = 5 OR mu.user_id = 5;
    

    Total runtime: 110.450 ms

    With EXISTS semi-joins

    (see query above)
    With this query, every row is checked once to see whether it qualifies. You don't need a DISTINCT, because rows are not duplicated to begin with.

    Total runtime: 26.569 ms

    UNION

    For completeness, the variant with UNION. Use UNION, not UNION ALL to remove duplicates:

    SELECT i.*
    FROM   items i 
    JOIN   tags_users  tu ON i.tag_id = tu.tag_id AND tu.user_id = 5
    UNION
    SELECT i.*
    FROM   items i 
    JOIN   marks_users mu ON i.mark_id = mu.mark_id AND mu.user_id = 5;
    

    Total runtime: 178.901 ms
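    The UNION vs UNION ALL point in isolation, as a tiny SQLite check via Python:

```python
import sqlite3

# UNION removes duplicate rows across the two branches; UNION ALL keeps them.
con = sqlite3.connect(":memory:")
dedup = con.execute("SELECT 1 AS v UNION SELECT 1").fetchall()
keep = con.execute("SELECT 1 AS v UNION ALL SELECT 1").fetchall()
print(dedup, keep)
```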

    qid & accept id: (11363669, 11364612) query: Oracle timeline report from overlapping intervals soup:

soup wrap:

    You can do it in one whopping statement:

    SQL> WITH timeline AS
      2          (SELECT mydate startdate,
      3                  lead(mydate) OVER (ORDER BY mydate) - 1 enddate
      4             FROM (SELECT startdate mydate FROM interval_test
      5                   UNION
      6                   SELECT enddate FROM interval_test)
      7            WHERE mydate IS NOT NULL)
      8  SELECT startdate,
      9         enddate,
     10         max(substr(sys_connect_by_path(item, ','), 2)) items
     11    FROM (SELECT t.startdate,
     12                 t.enddate,
     13                 item,
     14                 row_number() OVER (PARTITION BY t.startdate, t.enddate
     15                                    ORDER BY i.item) rn
     16            FROM    timeline t
     17                 JOIN
     18                    interval_test i
     19                 ON nvl(i.enddate, DATE '9999-12-31') - 1 >= t.startdate
     20                AND i.startdate <= nvl(t.enddate, DATE '9999-12-31'))
     21  START WITH rn = 1
     22  CONNECT BY rn = PRIOR rn + 1
     23         AND startdate = PRIOR startdate
     24  GROUP BY startdate, enddate
     25  ORDER BY startdate;
    
    STARTDATE  ENDDATE    ITEMS
    ---------- ---------- --------------------
    2012-01-01 2012-01-31 AAA
    2012-02-01 2012-02-29 AAA,BBB
    2012-03-01            AAA
    

    I used a first subquery to list all intervals:

    SQL> SELECT mydate startdate,
      2                  lead(mydate) OVER (ORDER BY mydate) - 1 enddate
      3             FROM (SELECT startdate mydate FROM interval_test
      4                   UNION
      5                   SELECT enddate FROM interval_test)
      6            WHERE mydate IS NOT NULL;
    
    STARTDATE  ENDDATE
    ---------- ----------
    2012-01-01 2012-01-31
    2012-02-01 2012-02-29
    2012-03-01
    

    joined to the following query that lists all items on one row given two dates:

    SELECT max(substr(sys_connect_by_path(item, ','), 2)) items
      FROM (SELECT item, row_number() OVER (ORDER BY item) rn
              FROM interval_test
             WHERE nvl(enddate, DATE '9999-12-31') >= :startdate
               AND startdate <= :enddate)
    CONNECT BY rn = PRIOR rn + 1
    START WITH rn = 1;
    
    qid & accept id: (11396151, 11398826) query: From within a grails HQL, how would I use a (non-aggregate) Oracle function? soup:

soup wrap:

    To call a function in HQL, the SQL dialect must be aware of it. You can add your function at runtime in BootStrap.groovy like this:

    import org.hibernate.dialect.function.SQLFunctionTemplate
    import org.hibernate.Hibernate
    
    def dialect = applicationContext.sessionFactory.dialect
    def getCurrentTerm = new SQLFunctionTemplate(Hibernate.INTEGER, "TT_STUDENT.STU_GENERAL.F_Get_Current_term()")
    dialect.registerFunction('F_Get_Current_term', getCurrentTerm)
    

    Once registered, you should be able to call the function in your queries:

    def a = SaturnStvterm.findAll("from SaturnStvterm as s where id > TT_STUDENT.STU_GENERAL.F_Get_Current_term()")
    
    qid & accept id: (11404664, 11407741) query: SQL Update most recent in table instead of most recent on selected record soup:

soup wrap:

    The problem is that you're not correlating your subquery with your outer query. It helps to use different aliases for all tables involved, and the join to Members inside the subquery seems unnecessary:

    create table Members (ID int not null,Attend_Freq int not null,Last_Attend_Date datetime not null)
    insert into Members (ID,Attend_Freq,Last_Attend_Date) values
    (123,4,'19000101')
    
    create table Attendance (ID int not null,Member_ID int not null,Last_Attend_Date datetime not null)
    insert into Attendance (ID,Member_ID,Last_Attend_Date) values
    (987,123,'20120605'),
    (888,123,'20120604'),
    (567,123,'20120603'),
    (456,234,'20120630'),
    (1909,292,'20120705')
    
    update M
    set
        Last_Attend_Date =
            (select MAX(Last_Attend_Date)
                from Attendance A2
            where A2.Member_ID = M.ID) --M is a reference to the outer table here
    from
        Members M
            inner join
        Attendance A
            on
                M.ID = A.Member_ID
    where
        m.Attend_Freq < 5 and
        A.Last_Attend_Date < DATEADD(day,-14,CURRENT_TIMESTAMP)
    
    select * from Members
    

    Result:

    ID          Attend_Freq Last_Attend_Date
    ----------- ----------- ----------------
    123         4           2012-06-05
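    The correlation itself can be sketched in SQLite via Python. SQLite lacks the UPDATE ... FROM join syntax used above, so this cut-down version (answer's data; the 14-day date filter replaced by a plain EXISTS guard) keeps only the correlated subquery:

```python
import sqlite3

# Cut-down correlated update: the subquery's A.Member_ID = Members.ID ties
# each member to MAX() over their own attendance rows only.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Members (ID INTEGER, Attend_Freq INTEGER, Last_Attend_Date TEXT);
    CREATE TABLE Attendance (ID INTEGER, Member_ID INTEGER, Last_Attend_Date TEXT);
    INSERT INTO Members VALUES (123, 4, '1900-01-01');
    INSERT INTO Attendance VALUES (987, 123, '2012-06-05'),
                                  (888, 123, '2012-06-04'),
                                  (567, 123, '2012-06-03');
""")

con.execute("""
    UPDATE Members
    SET Last_Attend_Date = (SELECT MAX(Last_Attend_Date)
                            FROM Attendance A
                            WHERE A.Member_ID = Members.ID)
    WHERE Attend_Freq < 5
      AND EXISTS (SELECT 1 FROM Attendance A WHERE A.Member_ID = Members.ID)
""")
members = con.execute("SELECT * FROM Members").fetchall()
print(members)  # member 123 picks up their own latest date, 2012-06-05
```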
    
    qid & accept id: (11419308, 11429318) query: how to pass javascript array to oracle store procedure by ado parameter object soup:

soup wrap:

    The format is:

    CreateParameter( name, type, direction, size, value )
    

    The values you'll need are:

    adVarChar = 200
    AdArray = 0x2000
    adParamInput = 1
    

    And you'll call it like:

    var param = cmd.CreateParameter( 'par', adVarChar + AdArray, adParamInput, 255, userArray )
    
    qid & accept id: (11419793, 11420154) query: Detect role in Postgresql dynamically soup:

soup wrap:

    You have to use EXECUTE for dynamic SQL. Also, a DO statement cannot take parameters. Create a plpgsql function:

    CREATE OR REPLACE FUNCTION f_revoke_all_from_role(_role text)
      RETURNS void AS
    $BODY$
    BEGIN
    
    IF EXISTS (SELECT 1 FROM pg_roles WHERE rolname = _role) THEN
        EXECUTE 'REVOKE ALL PRIVILEGES ON TABLE x FROM ' || quote_ident(_role);
    END IF;
    
    END;
    $BODY$ LANGUAGE plpgsql;
    

    Call:

    SELECT f_revoke_all_from_role('superman');
    
    qid & accept id: (11426911, 11427306) query: Convert sub-subquery with a order+limit 1 to left join soup:

soup wrap:

    I think that you might use workupdates as the 'ruling table' and attach the rest there:

    SELECT works.id, title, version, date, pages, uploaded, uri
        FROM workupdates
        JOIN info ON info.id=workupdates.info
        JOIN works ON workupdates.work = works.id
        WHERE workupdates.date =
            (SELECT MAX(date) FROM workupdates WHERE work = works.id)
    ;
    

    This is sub-optimal, though, since the JOINs would take place before the filtering on date.

    Or pivoting the tables around and having works rule, maybe better:

    SELECT works.id, title, version, date, pages, uploaded, uri
        FROM works
        JOIN workupdates ON (workupdates.work = works.id
              AND workupdates.date =
                  (SELECT MAX(date) FROM workupdates WHERE work = works.id))
        JOIN info ON info.id=workupdates.info
    ;
    

    It ought to be possible to save an iteration when joining workupdates and works, but it's not coming to me at the moment (and it might be I'm dreaming things up) :-(

    qid & accept id: (11436797, 11447658) query: Insert blank row to result after ORDER BY soup:

soup wrap:

    You can, pretty much as Michael and Gordon did, just tack an empty row on with union all, but you need to have it before the order by:

    ...
    and to_date(to_char(t.enddatetime, 'DD-MON-YYYY')) <=
        to_date('?DATE2::?','MM/DD/YYYY')
    union all
    select null, null, null, null, null, null, null, null
    from dual
    order by eventid, starttime, actionsequence;
    

    ... and you can't use the case that Gordon had directly in the order by because it isn't a selected value - you'll get an ORA-01785. (Note that the column names in the order by are the aliases that you assigned in the select, not those in the table; and you don't include the table name/alias; and it isn't necessary to alias the null columns in the union part, but you may want to for clarity).

    But this relies on null being sorted after any real values, which may not always be the case (not sure, but might be affected by NLS parameters), and it isn't known if the real eventkey can ever be null anyway. So it's probably safer to introduce a dummy column in both parts of the query and use that for the ordering, but exclude it from the results by nesting the query:

    select crewactionfactid, crewkey, eventid, actionsequence, type,
        starttime, endtime, duration
    from (
        select 0 as dummy_order_field,
            t.crewactionfactid,
            t.crewkey,
            t.eventkey as eventid,
            t.actionsequence,
            case t.actiontype
                when 'DISPATCHED' then '2-Dispatched'
                when 'ASSIGNED' then '1-Assigned'
                when 'ENROUTE' then '3-Enroute'
                when 'ARRIVED' then '4-Arrived'
                else 'unknown'
            end as type,
            t.startdatetime as starttime,
            t.enddatetime as endtime,
            t.duration
        from schema_name.table_name t
        where to_date(to_char(t.startdatetime, 'DD-MON-YYYY')) >=
            to_date('?DATE1::?','MM/DD/YYYY')
        and to_date(to_char(t.enddatetime, 'DD-MON-YYYY')) <=
            to_date('?DATE2::?','MM/DD/YYYY')
        union all
        select 1, null, null, null, null, null, null, null, null
        from dual
    )
    order by dummy_order_field, eventid, starttime, actionsequence;
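    On the NULL-ordering caveat above: SQLite, for one, sorts NULLs first in ascending order, so there the blank row would surface at the top; the dummy column pins it last regardless of where the engine puts NULLs. A quick check in Python:

```python
import sqlite3

# In SQLite's default ascending sort, NULL comes first, so the blank row
# leads the naive ordering; the dummy_order_field variant forces it last.
con = sqlite3.connect(":memory:")
naive = con.execute("""
    SELECT 'real' AS tag, 1 AS eventid UNION ALL
    SELECT 'blank', NULL
    ORDER BY eventid
""").fetchall()

pinned = con.execute("""
    SELECT 0 AS dummy_order_field, 'real' AS tag, 1 AS eventid UNION ALL
    SELECT 1, 'blank', NULL
    ORDER BY dummy_order_field, eventid
""").fetchall()
print(naive, pinned)
```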
    

    The date handling is odd though, particularly the to_date(to_char(...)) parts. It looks like you're just trying to lose the time portion, in which case you can use trunc instead:

    where trunc(t.startdatetime) >= to_date('?DATE1::?','MM/DD/YYYY')
    and trunc(t.enddatetime) <= to_date('?DATE2::?','MM/DD/YYYY')
    

    But applying any function to the date column prevents any index on it being used, so it's better to leave that alone and get the variable part in the right state for comparison:

    where t.startdatetime >= to_date('?DATE1::?','MM/DD/YYYY')
    and t.enddatetime < to_date('?DATE2::?','MM/DD/YYYY') + 1
    

    The + 1 adds a day, so if DATE2 was 07/12/2012, the filter is < 2012-07-13 00:00:00, which is the same as <= 2012-07-12 23:59:59.
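    That half-open comparison is easy to sanity-check in SQLite via Python (invented timestamps):

```python
import sqlite3

# "< DATE2 + 1 day" keeps every timestamp on DATE2 itself, including
# 23:59:59, without applying any function to the indexed column.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (ts TEXT)")
con.executemany("INSERT INTO t VALUES (?)",
                [("2012-07-12 23:59:59",), ("2012-07-13 00:00:00",)])

rows = con.execute("""
    SELECT ts FROM t
    WHERE ts < datetime('2012-07-12', '+1 day')
""").fetchall()
print(rows)  # only the 2012-07-12 row qualifies
```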

    qid & accept id: (11441696, 11441990) query: Merging matching data side by side from different tables soup:

soup wrap:

    If you have six different tables, then you need to join them together:

    select tjan.companyname, tjan.employee, tjan.id, . . .
    from tjan join
         tfeb 
         on tjan.companyname = tfeb.companyname and
            tjan.employee = tfeb.employee and
            tjan.id = tfeb.id
    etc. etc. etc.
    

    The problem that you have is that the populations in the different months may be different, so the joins will lose rows. A good way to handle this is with a driving table:

    select . . .
    from (select companyname, employee, id from tjan union
          select companyname, employee, id from tfeb union
          . . .
         ) driving left outer join
         tjan
         on tjan.companyname = driving.companyname and
            tjan.employee = driving.employee and
            tjan.id = driving.id left outer join
         tfeb
         on tfeb.companyname = driving.companyname and
            tfeb.employee = driving.employee and
            tfeb.id = driving.id left outer join
        . . .
    

    You can do all this in one SQL statement. There are repetitive parts (such as the column names in the select). Consider using Excel to generate these.
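    A runnable toy version of the driving-table pattern in SQLite via Python (two invented month tables; note that an employee with no January row still survives):

```python
import sqlite3

# The UNION of keys drives the LEFT JOINs, so no month's rows are lost.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tjan (companyname TEXT, employee TEXT, id INTEGER, hours INTEGER);
    CREATE TABLE tfeb (companyname TEXT, employee TEXT, id INTEGER, hours INTEGER);
    INSERT INTO tjan VALUES ('acme', 'ann', 1, 160);
    INSERT INTO tfeb VALUES ('acme', 'ann', 1, 150), ('acme', 'bob', 2, 140);
""")

rows = con.execute("""
    SELECT driving.id, tjan.hours AS jan_hours, tfeb.hours AS feb_hours
    FROM (SELECT companyname, employee, id FROM tjan
          UNION
          SELECT companyname, employee, id FROM tfeb) driving
    LEFT JOIN tjan ON tjan.companyname = driving.companyname
                  AND tjan.employee = driving.employee
                  AND tjan.id = driving.id
    LEFT JOIN tfeb ON tfeb.companyname = driving.companyname
                  AND tfeb.employee = driving.employee
                  AND tfeb.id = driving.id
    ORDER BY driving.id
""").fetchall()
print(rows)  # bob appears with a NULL January column
```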

    qid & accept id: (11445551, 11445879) query: How can I update extreme columns within range fast? soup:

soup wrap:

    I'm not sure what the performance of this will be like, but it's a more set-based approach than your current one:

    declare @T table (CategoryID int not null,Time datetime2 not null,IsSampled bit not null,Value decimal(10,5) not null)
    insert into @T (CategoryID,Time,IsSampled,Value) values
    (1,'2012-07-01T00:00:00.000',0,65.36347),
    (1,'2012-07-01T00:00:11.000',0,80.16729),
    (1,'2012-07-01T00:00:14.000',0,29.19716),
    (1,'2012-07-01T00:00:25.000',0,7.05847),
    (1,'2012-07-01T00:00:36.000',0,98.08257),
    (1,'2012-07-01T00:00:57.000',0,75.35524),
    (1,'2012-07-01T00:00:59.000',0,35.35524)
    
    ;with BinnedValues as (
        select CategoryID,Time,IsSampled,Value,DATEADD(minute,DATEDIFF(minute,0,Time),0) as TimeBin
        from @T
    ), MinMax as (
        select CategoryID,Time,IsSampled,Value,TimeBin,
            ROW_NUMBER() OVER (PARTITION BY CategoryID, TimeBin ORDER BY Value) as MinPos,
            ROW_NUMBER() OVER (PARTITION BY CategoryID, TimeBin ORDER BY Value desc) as MaxPos,
            ROW_NUMBER() OVER (PARTITION BY CategoryID, TimeBin ORDER BY Time) as Earliest
        from
            BinnedValues
    )
    update MinMax set IsSampled = 1 where MinPos=1 or MaxPos=1 or Earliest=1
    
    select * from @T
    

    Result:

    CategoryID  Time                   IsSampled Value
    ----------- ---------------------- --------- ---------------------------------------
    1           2012-07-01 00:00:00.00 1         65.36347
    1           2012-07-01 00:00:11.00 0         80.16729
    1           2012-07-01 00:00:14.00 0         29.19716
    1           2012-07-01 00:00:25.00 1         7.05847
    1           2012-07-01 00:00:36.00 1         98.08257
    1           2012-07-01 00:00:57.00 0         75.35524
    1           2012-07-01 00:00:59.00 0         35.35524
    

    It could possibly be sped up if the TimeBin column could be added as a computed column to the table and added to appropriate indexes.

    It should also be noted that this will mark a maximum of 3 rows as sampled - if the earliest is also the min or max value, it will only be marked once (obviously), but the next nearest min or max value will not be. Also, if multiple rows have the same Value, and that is the min or max value, one of the rows will be selected arbitrarily.
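    The binning rule above (per minute bin: flag the earliest row plus the rows holding the min and max Value) can be sanity-checked outside SQL. Here is a minimal Python sketch of the same rule, using the sample data from the answer:

```python
from collections import defaultdict
from datetime import datetime

# (Time, Value) pairs from the sample data above (all CategoryID 1)
rows = [
    ("2012-07-01T00:00:00", 65.36347),
    ("2012-07-01T00:00:11", 80.16729),
    ("2012-07-01T00:00:14", 29.19716),
    ("2012-07-01T00:00:25", 7.05847),
    ("2012-07-01T00:00:36", 98.08257),
    ("2012-07-01T00:00:57", 75.35524),
    ("2012-07-01T00:00:59", 35.35524),
]

def sample_times(rows):
    """Times that get IsSampled = 1: per minute bin, the earliest row
    plus the rows holding the minimum and maximum Value."""
    bins = defaultdict(list)
    for t, v in rows:
        dt = datetime.fromisoformat(t)
        # Truncate to the minute, mirroring DATEADD/DATEDIFF(minute, ...)
        bins[dt.replace(second=0, microsecond=0)].append((dt, v))
    sampled = set()
    for members in bins.values():
        sampled.add(min(members)[0])                      # earliest time
        sampled.add(min(members, key=lambda m: m[1])[0])  # min value
        sampled.add(max(members, key=lambda m: m[1])[0])  # max value
    return sampled

sampled = sample_times(rows)
```

    As in the result table, only 00:00:00 (earliest), 00:00:25 (min) and 00:00:36 (max) are flagged.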

    qid & accept id: (11456664, 11456990) query: Counting in sql and subas soup:

    soup wrap:

    Haven't tried it, but I think this should work:

     select NoOfChanges, count (*) from
     ( 
         select suba.id, count(*) as NoOfChanges from 
          ( select id, service_type from table_name
           group by 1,2) as  suba
           group by 1 
           having count (*) > 1 
        )
     subtableb
     group by NoOfChanges 
    

    You can think of that as

    select NoOfChanges, count (*) from subtableb
    group by NoOfChanges  
    

    but subtableb isn't a real table; it's the result of your previous query.
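    The nesting is easier to see in miniature. A hypothetical Python sketch of the same two-level aggregation, where the (id, service_type) rows stand in for table_name:

```python
from collections import Counter

# Hypothetical (id, service_type) rows standing in for table_name
rows = [(1, "a"), (1, "b"), (2, "a"), (3, "a"), (3, "b"), (3, "c"), (3, "c")]

# Inner query: distinct (id, service_type) pairs, counted per id,
# keeping only ids with more than one service_type (HAVING count(*) > 1)
per_id = Counter(id_ for id_, _ in set(rows))
no_of_changes = {id_: n for id_, n in per_id.items() if n > 1}

# Outer query: how many ids saw each number of changes
histogram = Counter(no_of_changes.values())
```

    With these rows, ids 1 and 3 had 2 and 3 service types respectively, so the histogram is one id with 2 changes and one id with 3.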

    qid & accept id: (11463090, 11463122) query: Single MySQL field with comma separated values soup:

    soup wrap:

    You can use this solution:

    SELECT b.filename
    FROM posts a
    INNER JOIN images b ON FIND_IN_SET(b.imageid, a.gallery) > 0
    WHERE a.postid = 3
    

    SQLFiddle

    However, you should really normalize your design and use a cross-reference table between posts and images. This would be the best and most efficient way of representing N:M (many-to-many) relationships. Not only is it much more efficient for retrieval, but it will vastly simplify updating and deleting image associations.


    ...but the comma-separated value is easier to work with as far as the jQuery script I am using to add to it.

    Even if you properly represented the N:M relationship with a cross-reference table, you can still get the imageid's in CSV format:

    Suppose you have a posts_has_images table with primary key fields (postid, imageid):

    You can use GROUP_CONCAT() to get a CSV of the imageid's for each postid:

    SELECT postid, GROUP_CONCAT(imageid) AS gallery
    FROM posts_has_images
    GROUP BY postid
    
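    SQLite's group_concat() behaves the same way as MySQL's GROUP_CONCAT(), so the cross-reference approach is easy to try locally. A minimal sketch (the table contents here are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE posts_has_images (
    postid  INTEGER,
    imageid INTEGER,
    PRIMARY KEY (postid, imageid))""")
conn.executemany("INSERT INTO posts_has_images VALUES (?, ?)",
                 [(3, 7), (3, 8), (3, 12), (4, 9)])

# Collapse each post's image ids back into a CSV string
gallery = dict(conn.execute(
    "SELECT postid, group_concat(imageid) FROM posts_has_images GROUP BY postid"))
```

    Each postid maps to a comma-separated string of its imageids, i.e. the same shape as the original gallery column, but derived from a properly normalized table.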
    qid & accept id: (11468551, 11521193) query: Getting hours interval between date range soup:

    soup wrap:

    Thanks all for the suggestions and comments. I finally found a way to solve my problem.

    Below is the script for the solution I came up with:

    DECLARE @start_date datetime = CONVERT(DATETIME,'2012-02-06 23:59:01.000',20);
    DECLARE @end_date datetime = CONVERT(DATETIME,'2012-12-08 23:59:17.000',20);
    DECLARE @org datetime  ;
    DECLARE @end datetime  ;
    DECLARE @datetable TABLE (h_start datetime, h_end datetime,h_sesc int);
    
    WHILE (dateadd(second, -1, dateadd(hour, datediff(hour, 0, @start_date)+1, 0))) < @end_date
    BEGIN
    SET @org = null;
    SET @org = @start_date;
    SET @end = (dateadd(second, -1, dateadd(hour, datediff(hour, 0, @org)+1, 0)));
    INSERT INTO @datetable (h_start, h_end,h_sesc)
    VALUES(dateadd(second, 0,@org), @end,DATEDIFF(second, @org,@end));
    
    SET @start_date = dateadd(second, 1,@end);
    
    END;
    
    
    INSERT INTO @datetable (h_start, h_end,h_sesc)
    VALUES(dateadd(second, 0,@start_date), @end_date,DATEDIFF(second, dateadd(second, 0,@start_date),@end_date));
    
    SELECT * FROM @datetable;
    

    The above will give the following results:

    h_start                 h_end                   h_sesc
    2012-02-06 23:59:01.000 2012-02-06 23:59:59.000 58
    2012-02-07 00:00:00.000 2012-02-07 00:59:59.000 3599
    2012-02-07 01:00:00.000 2012-02-07 01:59:59.000 3599
    2012-02-07 02:00:00.000 2012-02-07 02:59:59.000 3599
    2012-02-07 03:00:00.000 2012-02-07 03:59:59.000 3599
    2012-02-07 04:00:00.000 2012-02-07 04:59:59.000 3599
    2012-02-07 05:00:00.000 2012-02-07 05:59:59.000 3599
    

    ...

    2012-12-08 18:00:00.000 2012-12-08 18:59:59.000 3599
    2012-12-08 19:00:00.000 2012-12-08 19:59:59.000 3599
    2012-12-08 20:00:00.000 2012-12-08 20:59:59.000 3599
    2012-12-08 21:00:00.000 2012-12-08 21:59:59.000 3599
    2012-12-08 22:00:00.000 2012-12-08 22:59:59.000 3599
    2012-12-08 23:00:00.000 2012-12-08 23:59:17.000 3557
    

    Hope someone will find it useful.
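    For comparison, the same hour-bucketing loop can be sketched in Python; this is a hypothetical translation of the T-SQL above (same boundary convention: each bucket ends at hh:59:59), not a drop-in replacement:

```python
from datetime import datetime, timedelta

def hour_buckets(start, end):
    """Split [start, end] into clock-hour buckets, mirroring the T-SQL loop:
    each full bucket runs to hh:59:59, plus a final partial bucket to `end`."""
    buckets = []
    while True:
        # Last second of the hour that `start` falls in
        top = start.replace(minute=0, second=0) + timedelta(hours=1, seconds=-1)
        if top >= end:
            break
        buckets.append((start, top, int((top - start).total_seconds())))
        start = top + timedelta(seconds=1)
    buckets.append((start, end, int((end - start).total_seconds())))
    return buckets

buckets = hour_buckets(datetime(2012, 2, 6, 23, 59, 1),
                       datetime(2012, 2, 7, 5, 59, 59))
```

    With the sample start of 23:59:01, the first bucket is 58 seconds long and every later bucket spans a full hour, matching the h_sesc column above.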

    qid & accept id: (11480527, 11480595) query: MySQL - Select the least day of the current month/year, not necessarily the first day of the month soup:

    soup wrap:

    If you are interested in returning only one row, the easiest way to do this would be:

    SELECT t.*
      FROM table_name t
     WHERE t.name = '$username'
       AND t.theDate >= CAST(DATE_FORMAT(NOW(),'%Y-%m-01') AS DATE)
       AND t.theDate < DATE_ADD(DATE_FORMAT(NOW(),'%Y-%m-01'), INTERVAL 1 MONTH)
     ORDER BY t.name DESC, t.theDate DESC
     LIMIT 1
    

    The ORDER BY is based on the assumption that you have an index on (or with leading columns of) (name,theDate). That would be the most appropriate index for the predicates (i.e. conditions in the WHERE clause). There's really no need for us to sort the name column, since we know it's going to be equal to something... but specifying the ORDER BY in this way makes it more likely MySQL will do a reverse scan operation on the index, to return the rows in the correct order, avoiding a filesort operation.

    NOTE: I specify the bare theDate column in the conditions in the WHERE clause, rather than wrapping that in any function... by specifying the bare column and a bounded range, we enable MySQL to make use of an index range scan operation. There are other possible ways to include this condition in the WHERE clause, for example...

    DATE_FORMAT(t.theDate,'%Y-%m') = DATE_FORMAT(NOW(),'%Y-%m')
    

    which will return an equivalent result, but a predicate like this is not sargable. That is, MySQL can't/won't do a range scan on an index to satisfy this.

    If you are intending to get all the rows for the "least" date in a month for a given user (your question doesn't seem to indicate that you need only one row), here's one way get that result:

    SELECT t.* 
      FROM table_name t
      JOIN ( SELECT s.name
                  , s.theDate
               FROM table_name s 
              WHERE s.name = '$username'
                AND s.theDate >= CAST(DATE_FORMAT(NOW(),'%Y-%m-01') AS DATE)
                AND s.theDate < DATE_ADD(DATE_FORMAT(NOW(),'%Y-%m-01'), INTERVAL 1 MONTH)
              ORDER BY s.name DESC, s.theDate DESC
              LIMIT 1
           ) r
        ON r.name = t.name
       AND r.theDate = t.theDate 
    

    Again, MySQL can make use of an index (if available) with leading columns (name,theDate) to satisfy the predicates, and to do a reverse scan operation (avoiding a sort), and to do the JOIN operation.

    NOTE: We're assuming here that 'theDate' is datatype DATE (with no time component). If it's a DATETIME or a TIMESTAMP, there's a potential for a time component, and that query may not return all rows for a given "date" value, if the time components are different for the rows with the same "date". (e.g. '2012-07-13 17:30' and '2012-07-13 19:55' are different datetime values.) If we want to return both of those rows (because both are a date of "July 13"), we need to do a range scan instead of an equality test.

    SELECT t.* 
      FROM table_name t
      JOIN ( SELECT s.name
                  , s.theDate
               FROM table_name s 
              WHERE s.name = '$username'
                AND s.theDate >= CAST(DATE_FORMAT(NOW(),'%Y-%m-01') AS DATE)
                AND s.theDate < DATE_ADD(DATE_FORMAT(NOW(),'%Y-%m-01'), INTERVAL 1 MONTH)
              ORDER BY s.name DESC, s.theDate DESC
              LIMIT 1
           ) r
        ON t.name = r.name 
       AND t.theDate >= r.theDate
       AND t.theDate < DATE_FORMAT(DATE_ADD(r.theDate,INTERVAL 1 DAY),'%Y-%m-%d')
    

    Note those last two lines... we're looking for any rows with a theDate value that is greater than or equal to the "least" value found for the current month AND that is ALSO less than midnight of the following day.

    qid & accept id: (11495713, 11495825) query: Return results of query based on todays date in SQL (MySQL) Part 2 soup:

    soup wrap:

    You'll want to first JOIN the other table onto the first using related columns (I'm assuming id in the other table is related to table_c_id).

    And as I had stated in my answer to your previous question, you're better off making the comparison on the bare datetime column so that the query remains sargable (i.e. able to utilize indexes):

    SELECT     a.value
    FROM       table_c a
    INNER JOIN table_a b ON a.table_c_id = b.id
    WHERE      a.table_c_id IN (9,17,25) AND
               b.crm_date_time_column >= UNIX_TIMESTAMP(CURDATE())
    GROUP BY   a.value 
    

    This assumes the crm_date_time_column will never contain times which are in the future (e.g. tomorrow, next month, etc.), but if it can, you would just add:

    AND b.crm_date_time_column < UNIX_TIMESTAMP(CURDATE() + INTERVAL 1 DAY)
    

    as another condition in the WHERE clause.

    qid & accept id: (11510950, 11511356) query: Which values are missing in SQL from a list? soup:

    soup wrap:

    You could also try using EXCEPT (similar to MINUS in Oracle):

    (SELECT 1
    UNION
    SELECT 2
    UNION 
    SELECT 3
    UNION
    SELECT 4
    UNION
    SELECT 5
    UNION
    SELECT 6)
    EXCEPT
    (SELECT 2
     UNION
     SELECT 3
     UNION
     SELECT 4)
    

    Or, more relevant to your example:

    (SELECT 1
    UNION
    SELECT 2
    UNION 
    SELECT 3
    UNION
    SELECT 4
    UNION
    SELECT 5
    UNION
    SELECT 6)
    EXCEPT
    (SELECT Field FROM Table)        
    

    where Field contains 2, 4, and 5.
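    SQLite supports EXCEPT as well, so the pattern is easy to verify end-to-end. A small sketch, where the table t and its field column are hypothetical stand-ins for Table/Field:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (field INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(2,), (4,), (5,)])

# The full expected list EXCEPT what the table holds = the missing values
missing = [row[0] for row in conn.execute("""
    SELECT 1 UNION SELECT 2 UNION SELECT 3
    UNION SELECT 4 UNION SELECT 5 UNION SELECT 6
    EXCEPT
    SELECT field FROM t
    ORDER BY 1""")]
```

    With field holding 2, 4, and 5, the query returns 1, 3, and 6.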

    qid & accept id: (11568694, 11568814) query: SQL relational insert to 2 tables in single query without resorting to mysql_insert_id() soup:

    soup wrap:

    Thanks to @hackattack, who found this already answered elsewhere.

    BEGIN
    INSERT INTO users (username, password) 
      VALUES('test', 'test')
    INSERT INTO profiles (userid, bio, homepage) 
      VALUES(LAST_INSERT_ID(),'Hello world!', 'http://www.stackoverflow.com');
    COMMIT;
    

    BUT, ALAS - that didn't work. The MySQL 5 reference shows slightly different syntax:

    INSERT INTO `table2` (`description`) 
      VALUES('sdfsdf');# 1 row affected.
    INSERT INTO `table1`(`table1_id`,`title`) 
      VALUES(LAST_INSERT_ID(),'hello world');
    

    And, lo/behold - that works!

    More trouble ahead: Although the query will succeed in phpMyAdmin, my PHP installation complains about the query and throws a syntax error. I resorted to doing this the PHP way, making 2 separate queries and using mysql_insert_id().

    I find that annoying, but I guess that's not much less server load than a transaction.

    qid & accept id: (11696995, 11697220) query: SQL: retrieve records between dates in all databases soup:

    soup wrap:

    There's no need for the Date(...) as far as I can tell. This example seems to work:

    DECLARE @TheDate Date = '2012-07-01';
    
    SELECT 'hello' WHERE (@TheDate BETWEEN '2012-04-01' AND '2012-06-30')
    --None returned
    SET @TheDate = '2012-05-01'
    
    SELECT 'hello' WHERE (@TheDate BETWEEN '2012-04-01' AND '2012-06-30')
    --selects hello
    

    Edit: Btw, it's worth looking at This Question with the datetime answer (reposted here just to save effort).

    The between statement can cause issues with range boundaries for dates as

    BETWEEN '01/01/2009' AND '01/31/2009'
    

    is really interpreted as

    BETWEEN '01/01/2009 00:00:00' AND '01/31/2009 00:00:00'
    

    so will miss anything that occurred during the day of Jan 31st. In this case, you will have to use:

    myDate >= '01/01/2009 00:00:00' AND myDate < '02/01/2009 00:00:00'  --CORRECT!
    

    or

    BETWEEN '01/01/2009 00:00:00' AND '01/31/2009 23:59:59' --WRONG! (see update!)
    

    UPDATE: It is entirely possible to have records created within that last second of the day, with a datetime as late as 01/31/2009 23:59:59.997!!

    For this reason, the BETWEEN (firstday) AND (lastday 23:59:59) approach is not recommended.

    Use the myDate >= (firstday) AND myDate < (Lastday+1) approach instead.
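    The pitfall can be demonstrated concretely. A small SQLite sketch (ISO-format datetime strings compare correctly as text, so the same boundary logic applies; the table and values are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (mydate TEXT)")
conn.executemany("INSERT INTO events VALUES (?)",
                 [("2009-01-15 08:30:00",), ("2009-01-31 23:59:59.997",)])

# BETWEEN with a 23:59:59 upper bound silently drops the late record...
between = conn.execute("""SELECT count(*) FROM events
    WHERE mydate BETWEEN '2009-01-01 00:00:00'
                     AND '2009-01-31 23:59:59'""").fetchone()[0]

# ...while the half-open >= / < form catches everything in January
half_open = conn.execute("""SELECT count(*) FROM events
    WHERE mydate >= '2009-01-01 00:00:00'
      AND mydate <  '2009-02-01 00:00:00'""").fetchone()[0]
```

    The BETWEEN form counts only one of the two January rows; the half-open form counts both.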

    qid & accept id: (11753269, 11768001) query: string comparing query with chinese chars - Oracle Database soup:
    soup wrap:
     SQL> create table mytbl (data_col varchar2(200));
     Table created
     SQL> insert into mytbl values('在职'); 
     1 row inserted.
     SQL> commit;
     Commit complete.
     SQL> select * from mytbl where data_col like '%在职%';
     DATA_COL                                                                                                                                                                                               
     -----------
     在职 
    
     SQL> SELECT * FROM nls_database_parameters where parameter='NLS_CHARACTERSET';
     PARAMETER                      VALUE                                  
     ------------------------------ ----------------------------------------
     NLS_CHARACTERSET               AL32UTF8   
    

    Your NLS_CHARACTERSET should be set to AL32UTF8. So try

     SQL> ALTER SESSION SET NLS_CHARACTERSET = 'AL32UTF8';
    

    Also make sure that parameter NLS_NCHAR_CHARACTERSET is set to UTF8.

     SQL> ALTER SESSION SET NLS_NCHAR_CHARACTERSET = 'UTF8';
    
    qid & accept id: (11762700, 11762828) query: How do I get row id of a row in sql server soup:

    soup wrap:

    SQL Server does not track the order of inserted rows, so there is no reliable way to get that information given your current table structure. Even if employee_id is an IDENTITY column, it is not 100% foolproof to rely on that for order of insertion (since you can fill gaps and even create duplicate ID values using SET IDENTITY_INSERT ON). If employee_id is an IDENTITY column and you are sure that rows aren't manually inserted out of order, you should be able to use this variation of your query to select the data in sequence, newest first:

    SELECT 
       ROW_NUMBER() OVER (ORDER BY EMPLOYEE_ID DESC) AS ID, 
       EMPLOYEE_ID,
       EMPLOYEE_NAME 
    FROM dbo.CSBCA1_5_FPCIC_2012_EES207201222743
    ORDER BY ID;
    

    You can make a change to your table to track this information for new rows, but you won't be able to derive it for your existing data (they will all be marked as inserted at the time you make this change).

    ALTER TABLE dbo.CSBCA1_5_FPCIC_2012_EES207201222743 
    -- wow, who named this?
      ADD CreatedDate DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP;
    

    Note that this may break existing code that just does INSERT INTO dbo.whatever SELECT/VALUES() - e.g. you may have to revisit your code and define a proper, explicit column list.

    qid & accept id: (11769527, 11769986) query: vb.net comparing two databases then insert or delete soup:

    soup wrap:

    This query will return all of the rows in the attached table that are not in the local version of the table:

    SELECT * FROM attachedTable 
    WHERE col1 NOT IN( SELECT lt.col1 FROM localTable as lt)
    

    And this will do the converse, returning all rows in the local table that are not matched in the remote table.

    SELECT * FROM localTable 
    WHERE col1 NOT IN( SELECT rt.col1 FROM attachedTable As rt)
    
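    Both anti-joins can be sketched against SQLite in-memory tables (the column values here are hypothetical). One caveat worth knowing: NOT IN matches nothing if the subquery returns any NULL, so this assumes col1 is non-nullable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE attachedTable (col1 INTEGER)")
conn.execute("CREATE TABLE localTable (col1 INTEGER)")
conn.executemany("INSERT INTO attachedTable VALUES (?)", [(1,), (2,), (3,)])
conn.executemany("INSERT INTO localTable VALUES (?)", [(2,), (3,), (4,)])

# In the attached table but not the local one -> candidates to insert
to_insert = [r[0] for r in conn.execute(
    "SELECT col1 FROM attachedTable WHERE col1 NOT IN (SELECT col1 FROM localTable)")]

# In the local table but not the attached one -> candidates to delete
to_delete = [r[0] for r in conn.execute(
    "SELECT col1 FROM localTable WHERE col1 NOT IN (SELECT col1 FROM attachedTable)")]
```

    With this data, row 1 would be inserted locally and row 4 deleted.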
    qid & accept id: (11783678, 11788849) query: How can I find tables which reference a particular row via a foreign key? soup:


    NULL values in referencing columns

    This query produces the DML statement to find all rows in all tables where a column has a foreign-key constraint referencing another table but holds a NULL value in that column:

    WITH x AS (
     SELECT c.conrelid::regclass    AS tbl
          , c.confrelid::regclass   AS ftbl
          , quote_ident(k.attname)  AS fk
          , quote_ident(pf.attname) AS pk
     FROM   pg_constraint c
     JOIN   pg_attribute  k ON (k.attrelid, k.attnum) = (c.conrelid, c.conkey[1])
     JOIN   pg_attribute  f ON (f.attrelid, f.attnum) = (c.confrelid, c.confkey[1])
     LEFT   JOIN pg_constraint p  ON p.conrelid = c.conrelid AND p.contype = 'p'
     LEFT   JOIN pg_attribute  pf ON (pf.attrelid, pf.attnum)
                                   = (p.conrelid, p.conkey[1])
     WHERE  c.contype   = 'f'
     AND    c.confrelid = 'fk_tbl'::regclass  -- references to this tbl
     AND    f.attname   = 'fk_tbl_id'         -- and only to this column
    )
    SELECT string_agg(format(
    'SELECT %L AS tbl
         , %L AS pk
         , %s::text AS pk_val
         , %L AS fk
         , %L AS ftbl
    FROM   %1$s WHERE %4$s IS NULL'
                      , tbl
                      , COALESCE(pk, 'NONE')
                      , COALESCE(pk, 'NULL')
                      , fk
                      , ftbl), '
    UNION ALL
    ') || ';'
    FROM   x;
    

    Produces a query like this:

    SELECT 'some_tbl' AS tbl
         , 'some_tbl_id' AS pk
         , some_tbl_id::text AS pk_val
         , 'fk_tbl_id' AS fk
         , 'fk_tbl' AS ftbl
    FROM   some_tbl WHERE fk_tbl_id IS NULL
    UNION ALL
    SELECT 'other_tbl' AS tbl
         , 'other_tbl_id' AS pk
         , other_tbl_id::text AS pk_val
         , 'some_name_id' AS fk
         , 'fk_tbl' AS ftbl
    FROM   other_tbl WHERE some_name_id IS NULL;
    

    Produces output like this:

        tbl    |     pk       | pk_val |    fk        |  ftbl
    -----------+--------------+--------+--------------+--------
     some_tbl  | some_tbl_id  | 49     | fk_tbl_id    | fk_tbl
     some_tbl  | some_tbl_id  | 58     | fk_tbl_id    | fk_tbl
     other_tbl | other_tbl_id | 66     | some_name_id | fk_tbl
     other_tbl | other_tbl_id | 67     | some_name_id | fk_tbl
    

    NULL values in referenced columns

    My first solution does something subtly different from what you ask, because what you describe (as I understand it) is non-existent. The value NULL is "unknown" and cannot be referenced. If you actually want to find rows with a NULL value in a column that has FK constraints pointing to it (not to the particular row with the NULL value, of course), then the query can be much simplified:

    WITH x AS (
     SELECT c.confrelid::regclass   AS ftbl
           ,quote_ident(f.attname)  AS fk
           ,quote_ident(pf.attname) AS pk
           ,string_agg(c.conrelid::regclass::text, ', ') AS referencing_tbls
     FROM   pg_constraint c
     JOIN   pg_attribute  f ON (f.attrelid, f.attnum) = (c.confrelid, c.confkey[1])
     LEFT   JOIN pg_constraint p  ON p.conrelid = c.confrelid AND p.contype = 'p'
     LEFT   JOIN pg_attribute  pf ON (pf.attrelid, pf.attnum)
                                   = (p.conrelid, p.conkey[1])
     WHERE  c.contype = 'f'
     -- AND    c.confrelid = 'fk_tbl'::regclass  -- only referring this tbl
     GROUP  BY 1, 2, 3
    )
    SELECT string_agg(format(
    'SELECT %L AS ftbl
         , %L AS pk
         , %s::text AS pk_val
         , %L AS fk
         , %L AS referencing_tbls
    FROM   %1$s WHERE %4$s IS NULL'
                      , ftbl
                      , COALESCE(pk, 'NONE')
                      , COALESCE(pk, 'NULL')
                      , fk
                      , referencing_tbls), '
    UNION ALL
    ') || ';'
    FROM   x;
    

    This finds all such rows in the entire database (the restriction to one table is commented out). Tested with Postgres 9.1.4, and it works for me.

    I group multiple tables referencing the same foreign column into one query and add a list of referencing tables to give an overview.
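    The same catalog-driven idea can be sketched against SQLite instead of pg_constraint: discover every column that references the target table, then build one UNION ALL query over the referencing tables. The table and column names below are illustrative, mirroring the answer's example.

```python
import sqlite3

# Build and run the generated UNION ALL query; fk_tbl/some_tbl are
# illustrative names mirroring the answer's example.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fk_tbl (fk_tbl_id INTEGER PRIMARY KEY);
CREATE TABLE some_tbl (some_tbl_id INTEGER PRIMARY KEY,
                       fk_tbl_id INTEGER REFERENCES fk_tbl (fk_tbl_id));
INSERT INTO fk_tbl (fk_tbl_id) VALUES (1);
INSERT INTO some_tbl VALUES (49, NULL), (50, 1);
""")

# Discover every column referencing fk_tbl from the catalog, then generate
# one SELECT per referencing table, glued together with UNION ALL.
parts = []
for (tbl,) in con.execute("SELECT name FROM sqlite_master WHERE type = 'table'"):
    for fk in con.execute(f"PRAGMA foreign_key_list({tbl})"):
        referenced_tbl, fk_col = fk[2], fk[3]
        if referenced_tbl == "fk_tbl":
            parts.append(f"SELECT '{tbl}' AS tbl, rowid AS pk_val, '{fk_col}' AS fk "
                         f"FROM {tbl} WHERE {fk_col} IS NULL")

rows = con.execute("\nUNION ALL\n".join(parts)).fetchall()
print(rows)  # [('some_tbl', 49, 'fk_tbl_id')]
```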

    qid & accept id: (11793666, 11793730) query: Retrieving records from a table within two date variables soup:
    SELECT myColumn
      FROM myTable
     WHERE Date BETWEEN @StartDate AND @EndDate
    

    Edit: BETWEEN is inclusive (both dates are included in the result), so if you want to exclude one of the endpoints, it is better to spell out the comparisons explicitly and change >= or <= to > or < as needed:

    SELECT myColumn
      FROM myTable
     WHERE Date >= @StartDate
       AND Date <= @EndDate
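    The inclusiveness is easy to verify with a tiny in-memory SQLite table (dates stored as ISO strings here for simplicity; the sample rows are made up):

```python
import sqlite3

# Dates stored as ISO strings for simplicity; the comparison behavior is
# the same.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE myTable (d TEXT)")
con.executemany("INSERT INTO myTable VALUES (?)",
                [("2012-01-01",), ("2012-01-15",), ("2012-01-31",), ("2012-02-01",)])

# BETWEEN keeps both endpoints ...
between = con.execute("""SELECT d FROM myTable
    WHERE d BETWEEN '2012-01-01' AND '2012-01-31' ORDER BY d""").fetchall()
# ... while explicit comparisons let you drop one of them:
exclusive_end = con.execute("""SELECT d FROM myTable
    WHERE d >= '2012-01-01' AND d < '2012-01-31' ORDER BY d""").fetchall()

print(len(between), len(exclusive_end))  # 3 2
```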
    
    qid & accept id: (11814210, 11814632) query: SQL Query Group By Mount And Year soup:


    Try this:

    Declare @Sample table 
    (Buy datetime ,Qty int)
    
    Insert into @Sample values
    ( '01-01-2012' ,1),
    ('01-01-2012',1 ),
    ('01-02-2012',1 ),
    ('01-03-2012',1 ),
    ('01-05-2012',1 ),
    ('01-07-2012',1 ),
    ('01-12-2012',1 )
    
    ;with cte as 
    (
      select top 12 row_number() over(order by t1.number) as N
      from   master..spt_values t1 
       cross join master..spt_values t2
     )
    select t.N as month,
    isnull(datepart(year,y.buy),2012) as Year,
    sum(isnull(y.qty,0)) as Quantity
    from cte t
    left join @Sample y
    on month(convert(varchar(20),buy,103)) = t.N
    group by y.buy,t.N
    

    Create a month table to store the values from 1 to 12. Instead of master..spt_values you can also use sys.all_objects:

      select row_number() over (order by object_id) as months
      from sys.all_objects  
    

    or use a recursive cte to generate the month table

    ;with cte(N) as 
    (
    Select 1 
    union all
    Select 1+N from cte where N<12
    )
    Select * from cte
    

    and then use a LEFT JOIN to compare the values from the month table with your table, and the ISNULL function to handle the NULL values.
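    Those pieces fit together as in this SQLite sketch, where COALESCE plays the role of ISNULL and strftime('%m') extracts the month; the sample rows are made up:

```python
import sqlite3

# Sketch in SQLite: COALESCE plays the role of ISNULL and strftime('%m')
# extracts the month; the sample rows are made up.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Sample (Buy TEXT, Qty INT);
INSERT INTO Sample VALUES ('2012-01-05', 1), ('2012-01-20', 1), ('2012-03-02', 1);
""")

rows = con.execute("""
WITH RECURSIVE cte(N) AS (
    SELECT 1
    UNION ALL
    SELECT N + 1 FROM cte WHERE N < 12
)
SELECT cte.N AS month, COALESCE(SUM(s.Qty), 0) AS quantity
FROM cte
LEFT JOIN Sample s ON CAST(strftime('%m', s.Buy) AS INT) = cte.N
GROUP BY cte.N
ORDER BY cte.N
""").fetchall()

print(rows[:3])  # [(1, 2), (2, 0), (3, 1)]
```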

    qid & accept id: (11822599, 11823335) query: How to compare oracle date and lotusscript date? soup:


    Create an Oracle date using the to_date function.

    to_date(,'format')

    Format your date as a string, for example 06-05-2012, and this will return an Oracle date.

    In PL/SQL that would look like:

    my_string := '06-08-2012';
    my_date := to_date(my_string,'DD-MM-YYYY');
    

    But of course you can do this in SQL directly.

    where LAST_MODIFIED > to_date(,)
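    For comparison, the equivalent conversion in Python: to_date with the 'DD-MM-YYYY' format corresponds to strptime with '%d-%m-%Y':

```python
from datetime import datetime

# to_date(str, 'DD-MM-YYYY') corresponds to strptime with '%d-%m-%Y'.
my_date = datetime.strptime('06-08-2012', '%d-%m-%Y')
print(my_date)  # 2012-08-06 00:00:00
```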
    
    qid & accept id: (11833448, 11833863) query: How to query 2 different date ranges depending on the day it is run soup:


    If you have your query in a view, you might use this:

    where
        Invoice_Date between
        (
            case
                when datepart(dd, getdate()) = 1 then dateadd(mm, -1, getdate())
                else dateadd(dd, -15, getdate())
            end
        )
        and
        (
            case
                when datepart(dd, getdate()) = 1 then dateadd(dd, -1, getdate())
                else dateadd(dd, -1, getdate())
            end
        )
    

    UPDATE: Ignoring the time

    (I know it looks ugly.)

    where
        Invoice_Date between
        (
            case
                when datepart(dd, dateadd(dd, datediff(dd, 0, getdate()), 0)) = 1 then dateadd(mm, -1, dateadd(dd, datediff(dd, 0, getdate()), 0))
                else dateadd(dd, -15, dateadd(dd, datediff(dd, 0, getdate()), 0))
            end
        )
        and
        (
            case
                when datepart(dd, dateadd(dd, datediff(dd, 0, getdate()), 0)) = 1 then dateadd(dd, -1, dateadd(dd, datediff(dd, 0, getdate()), 0))
                else dateadd(dd, -1, dateadd(dd, datediff(dd, 0, getdate()), 0))
            end
        )
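    The branching itself can be sketched in Python to make the intent explicit (the helper name is made up): on the 1st of the month the range covers the whole previous month, otherwise the preceding 15 days, and both ranges end yesterday:

```python
from datetime import date, timedelta

# Hypothetical helper mirroring the CASE expressions: on the 1st of the
# month report the whole previous month, otherwise the preceding 15 days;
# both ranges end yesterday.
def invoice_range(today):
    if today.day == 1:
        start = (today - timedelta(days=1)).replace(day=1)  # 1st of last month
    else:
        start = today - timedelta(days=15)
    return start, today - timedelta(days=1)

print(invoice_range(date(2012, 8, 1)))   # July 1 .. July 31
print(invoice_range(date(2012, 8, 20)))  # Aug 5 .. Aug 19
```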
    
    qid & accept id: (11844855, 11847599) query: linked tables in firebird, discard records that have a specific value in a one to many linked table soup:


    Below are three queries that will do the task:

    SELECT
      c.*
    FROM
      client c 
    WHERE
      NOT EXISTS(SELECT * FROM notes n WHERE n.client_id = c.client_id 
        AND n.note = 'do not send')
    

    or

    SELECT
      c.*, n.client_id
    FROM
      client c LEFT JOIN
        (SELECT client_id FROM notes WHERE note = 'do not send') n
      ON c.client_id = n.client_id
    WHERE
      n.client_id IS NULL
    

    or

    SELECT
      c.*
    FROM
      client c 
    WHERE
      NOT c.client_id IN (SELECT client_id FROM notes n 
        WHERE n.note = 'do not send')
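    All three formulations can be checked side by side in an in-memory SQLite database (the sample rows are made up; client 2 has the blocking note):

```python
import sqlite3

# Sample rows are made up; client 2 has the blocking note.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE client (client_id INT);
CREATE TABLE notes (client_id INT, note TEXT);
INSERT INTO client VALUES (1), (2), (3);
INSERT INTO notes VALUES (2, 'do not send'), (3, 'call back');
""")

not_exists = con.execute("""SELECT c.client_id FROM client c
    WHERE NOT EXISTS (SELECT * FROM notes n
                      WHERE n.client_id = c.client_id AND n.note = 'do not send')
    ORDER BY c.client_id""").fetchall()

left_join = con.execute("""SELECT c.client_id FROM client c
    LEFT JOIN (SELECT client_id FROM notes WHERE note = 'do not send') n
           ON c.client_id = n.client_id
    WHERE n.client_id IS NULL ORDER BY c.client_id""").fetchall()

not_in = con.execute("""SELECT c.client_id FROM client c
    WHERE c.client_id NOT IN (SELECT client_id FROM notes n
                              WHERE n.note = 'do not send')
    ORDER BY c.client_id""").fetchall()

print(not_exists, left_join, not_in)  # all [(1,), (3,)]
```

    Note that the NOT IN variant behaves differently from the other two if notes.client_id can be NULL.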
    
    qid & accept id: (11847584, 11847747) query: Transposing Rows in to colums in SQL Server 2005 soup:


    You will need to perform a PIVOT. There are two ways to do this with PIVOT, either a Static Pivot where you code the columns to transform or a Dynamic Pivot which determines the columns at execution.

    Static Pivot:

    SELECT *
    FROM
    (
        SELECT col1, col2
        FROM yourTable
    ) x
    PIVOT
    (
       min(col2)
       for col1 in ([A], [B], [C])
    )p
    

    See SQL Fiddle with Demo

    Dynamic Pivot:

    DECLARE @cols AS NVARCHAR(MAX),
        @query  AS NVARCHAR(MAX)
    
    select @cols = STUFF((SELECT distinct ',' + QUOTENAME(col1) 
                        from t1
                FOR XML PATH(''), TYPE
                ).value('.', 'NVARCHAR(MAX)') 
            ,1,1,'')
    
    set @query = 'SELECT ' + @cols + ' from 
                 (
                    select col1, col2
                    from t1
                ) x
                pivot 
                (
                    min(col2)
                    for col1 in (' + @cols + ')
                ) p '
    
    execute(@query)
    

    See SQL Fiddle with Demo

    If you do not want to use the PIVOT function, then you can perform a similar type of query with CASE statements:

    select 
      SUM(CASE WHEN col1 = 'A' THEN col2 END) as A,
      SUM(CASE WHEN col1 = 'B' THEN col2 END) as B,
      SUM(CASE WHEN col1 = 'C' THEN col2 END) as C
    FROM t1
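    The CASE variant is easy to try in SQLite (which has no PIVOT operator), with illustrative data:

```python
import sqlite3

# t1 holds (col1, col2) pairs as in the answer; values are illustrative.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1 (col1 TEXT, col2 INT);
INSERT INTO t1 VALUES ('A', 1), ('A', 2), ('B', 5), ('C', 7);
""")

row = con.execute("""SELECT
    SUM(CASE WHEN col1 = 'A' THEN col2 END) AS A,
    SUM(CASE WHEN col1 = 'B' THEN col2 END) AS B,
    SUM(CASE WHEN col1 = 'C' THEN col2 END) AS C
FROM t1""").fetchone()

print(row)  # (3, 5, 7)
```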
    

    See SQL Fiddle with Demo

    qid & accept id: (11852951, 11853200) query: SQL - Determine count of records active at time soup:


    The following uses correlated subqueries to get the numbers you want. The idea is to count the number of cumulative starts and cumulative ends, up to each time:

    with alltimes as
        (select t.*
         from ((select part_start_time as thetime, 1 as IsStart, 0 as IsEnd
                from t
               ) union all
               (select part_end_time, 0 as isStart, 1 as IsEnd
                from t
               )
              ) t
         )
    select t.*,
           (cumstarts - cumends) as numactive
    from (select alltimes.thetime,
                 (select sum(a2.isStart)
                  from alltimes a2 where a2.thetime <= alltimes.thetime
                 ) as cumStarts,
                 (select sum(a2.isEnd)
                  from alltimes a2 where a2.thetime <= alltimes.thetime
                 ) as cumEnds
          from alltimes
         ) t
    

    The output is based on each time present in the data.

    As a rule of thumb, you don't want to be doing lots of data work on the application side. When possible, that is best done in the database.

    This query will produce duplicate rows when there are multiple starts and ends at the same time; you would need to decide how to treat those. But the idea is the same. The outer select would be:

    select t.thetime, max(cumstarts - cumends) as numactives
    

    and you need a group by clause:

    group by t.thetime
    

    The "max" gives the starts precedence (meaning that with the same timestamp, the starts are treated as happening first, so you get the maximum number active at that time). "Min" would give the ends precedence. And if you use average, remember to convert to floating point:

    select t.thetime, avg(cumstarts*1.0 - cumends) as avgnumactives
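    Here is the cumulative starts-minus-ends idea as a runnable sketch in SQLite, with explicit aliases for the correlated subqueries and made-up interval data (table t and its columns mirror the answer):

```python
import sqlite3

# Two intervals: (1, 3) and (2, 5); made-up data for the sketch.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (part_start_time INT, part_end_time INT);
INSERT INTO t VALUES (1, 3), (2, 5);
""")

rows = con.execute("""
WITH alltimes AS (
    SELECT part_start_time AS thetime, 1 AS isStart, 0 AS isEnd FROM t
    UNION ALL
    SELECT part_end_time, 0, 1 FROM t
)
SELECT a1.thetime,
       (SELECT SUM(a2.isStart) FROM alltimes a2 WHERE a2.thetime <= a1.thetime)
     - (SELECT SUM(a2.isEnd)   FROM alltimes a2 WHERE a2.thetime <= a1.thetime)
       AS numactive
FROM alltimes a1
ORDER BY a1.thetime
""").fetchall()

print(rows)  # [(1, 1), (2, 2), (3, 1), (5, 0)]
```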
    
    qid & accept id: (11900470, 12310205) query: Oracle: subtract millisecond from a datetime soup:


    For adding or subtracting an amount of time expressed as a literal you can use INTERVAL.

    SELECT TO_TIMESTAMP('10/08/2012','DD/MM/YYYY')
         - INTERVAL '0.001' SECOND 
    FROM dual;
    

    There are also now standard ways to express date and time literals that avoid the various database-specific conversion functions.

    SELECT TIMESTAMP '2012-10-08 00:00:00' 
       - INTERVAL '0.001' SECOND DATA
    FROM dual;
    

    As for your original question: the time part of a day is stored in fractional days, so one second is:

    1 / (hours in day * minutes in hour * seconds in a minute)
    

    Divide by 1000 to get milliseconds.

    1 / (24 * 60 * 60 * 1000)
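    Checked in Python: subtracting that fraction of a day from midnight lands on 23:59:59.999 of the previous day:

```python
from datetime import datetime, timedelta

# One millisecond as a fraction of a day; subtracting it from midnight
# lands on 23:59:59.999 of the previous day.
one_ms_in_days = 1 / (24 * 60 * 60 * 1000)
result = datetime(2012, 10, 8) - timedelta(days=one_ms_in_days)
print(result)  # 2012-10-07 23:59:59.999000
```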
    
    qid & accept id: (11912188, 11925465) query: Smart SQL Merge - n rows, coalesce soup:


    If the performance is important enough to justify a couple of hours of coding and you are allowed to use SQLCLR, you can calculate all the values in a single table scan with a multi-parameter user-defined aggregate.

    Here's an example of an aggregate that returns lowest-ranked non-NULL string:

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Data.SqlTypes;
    using System.IO;
    using Microsoft.SqlServer.Server;
    
    [Serializable]
    [SqlUserDefinedAggregate(Format.UserDefined, MaxByteSize = -1, IsNullIfEmpty = true)]
    public struct LowestRankString : IBinarySerialize
    {
        public int currentRank;
        public SqlString currentValue;
    
        public void Init()
        {
            currentRank = int.MaxValue;
            currentValue = SqlString.Null;
        }
    
        public void Accumulate(int Rank, SqlString Value)
        {
            if (!Value.IsNull)
            {
                if (Rank <= currentRank)
                {
                    currentRank = Rank;
                    currentValue = Value;
                }
            }
        }
    
        public void Merge(LowestRankString Group)
        {
            Accumulate(Group.currentRank, Group.currentValue);
        }
    
        public SqlString Terminate()
        {
            return currentValue;
        }
    
        public void Read(BinaryReader r)
        {
            currentRank = r.ReadInt32();
            bool hasValue = r.ReadBoolean();
            if (hasValue)
            {
                currentValue = new SqlString(r.ReadString());
            }
            else
            {
                currentValue = SqlString.Null;
            }
        }
    
        public void Write(BinaryWriter w)
        {
            w.Write(currentRank);
    
            bool hasValue = !currentValue.IsNull;
            w.Write(hasValue);
            if (hasValue)
            {
                w.Write(currentValue.Value);
            }
        }
    }
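    The same Init/Accumulate/Terminate shape can be mimicked with SQLite's user-defined aggregate API, which makes the aggregate's logic easy to test outside SQL Server. This is a sketch of the logic, not the CLR deployment; the sample data is illustrative:

```python
import sqlite3

class LowestRankString:
    """Mirrors Init/Accumulate/Terminate of the CLR aggregate."""
    def __init__(self):                  # Init
        self.rank, self.value = float("inf"), None
    def step(self, rank, value):         # Accumulate
        if value is not None and rank <= self.rank:
            self.rank, self.value = rank, value
    def finalize(self):                  # Terminate
        return self.value

con = sqlite3.connect(":memory:")
con.create_aggregate("lowest_rank_string", 2, LowestRankString)
con.executescript("""
CREATE TABLE TopNonNullRank (Id INT, UserId TEXT, Value1 TEXT);
INSERT INTO TopNonNullRank VALUES
    (1, 'Ada', NULL), (2, 'Ada', 'Top value 1 for A'), (3, 'Ada', 'Other');
""")

rows = con.execute("""SELECT UserId, lowest_rank_string(Id, Value1)
                      FROM TopNonNullRank GROUP BY UserId""").fetchall()
print(rows)  # [('Ada', 'Top value 1 for A')]
```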
    

    Assuming your table looks something like this:

    CREATE TABLE TopNonNullRank (
        Id INT NOT NULL,
        UserId NVARCHAR (32) NOT NULL,
        Value1 NVARCHAR (128) NULL,
        Value2 NVARCHAR (128) NULL,
        Value3 NVARCHAR (128) NULL,
        Value4 NVARCHAR (128) NULL,
        PRIMARY KEY CLUSTERED (Id ASC)
    );

    INSERT INTO TopNonNullRank (Id, UserId, Value1, Value2, Value3, Value4) VALUES 
        (1, N'Ada', NULL, N'Top value 2 for A', N'Top value 3 for A', NULL),
        (2, N'Ada', N'Top value 1 for A', NULL, N'Other value 3', N'Top value 4 for A'),
        (3, N'Ada', N'Other value 1 for A', N'Other value 2 for A', N'Other value 3 for A', NULL),
        (4, N'Bob', N'Top value 1 for B', NULL, NULL, NULL),
        (5, N'Bob', NULL, NULL, NULL, N'Top value 4 for B'),
        (6, N'Bob', N'Other value 1 for B', N'Top value 2 for B', NULL, N'Other value 4');
    

    The following simple query returns the top non-NULL value of each column for each user.

    SELECT 
        UserId,
        dbo.LowestRankString(Id, Value1) AS TopValue1,
        dbo.LowestRankString(Id, Value2) AS TopValue2,
        dbo.LowestRankString(Id, Value3) AS TopValue3,
        dbo.LowestRankString(Id, Value4) AS TopValue4
    FROM TopNonNullRank
    GROUP BY UserId
    

    The only thing left is merging the results back to the original table. The simplest way would be something like this:

    WITH TopValuesPerUser AS
    (
        SELECT 
            UserId,
            dbo.LowestRankString(Id, Value1) AS TopValue1,
            dbo.LowestRankString(Id, Value2) AS TopValue2,
            dbo.LowestRankString(Id, Value3) AS TopValue3,
            dbo.LowestRankString(Id, Value4) AS TopValue4
        FROM TopNonNullRank
        GROUP BY UserId
    )
    UPDATE TopNonNullRank
    SET
        Value1 = TopValue1,
        Value2 = TopValue2,
        Value3 = TopValue3,
        Value4 = TopValue4
    FROM TopNonNullRank AS OriginalTable
    LEFT JOIN TopValuesPerUser ON TopValuesPerUser.UserId = OriginalTable.UserId;
    

    Note that this update still leaves you with duplicate rows, and you would need to get rid of them.

    You could also get fancier and store the results of this query in a temporary table, and then use a MERGE statement to apply them to the original table.

    Another option would be to store the results in a new table, and then swap it with the original table using the sp_rename stored procedure.

    qid & accept id: (11960289, 11960469) query: Access SQL update based on count results and conditional update soup:


    You can wrap a query in another query:

    SELECT TechID, Rank FROM Rank,
    (SELECT x.TechID, Count(*) AS cnt, tblEmployeeData.LName, 
        tblEmployeeData.Pernr, tblEmployeeData.Occurrences, tblEmployeeData.Standing
    FROM tblEmployeeData
    INNER JOIN tblOccurrence AS x ON tblEmployeeData.TechID = x.TechID
    WHERE (((x.OccurrenceDate) Between DateAdd("m",-6,Date()) And Date())
      AND ((Exists     
        (SELECT * FROM tblOccurrence AS y  WHERE y.TechID = x.TechID AND DATEADD 
        ("d", -1, x.[OccurrenceDate]) = y.[OccurrenceDate]))=False))
    GROUP BY x.TechID, tblEmployeeData.LName, tblEmployeeData.Pernr) a
    WHERE a.Cnt BETWEEN Rank.Low And rank.High
    

    The idea is that you use the query with a Rank table, like so:

    Low High    Rank
    0   3       Good
    4   5       Verbal Warning
    6   7       Written Warning
    8   8       Final Written Warning
    9   99      Termination
    

    Edit re comments

    This runs for me in a rough mock-up

    SELECT a.TechID, tblRank.Rank FROM tblRank, (SELECT x.TechID, Count(*) AS cnt, tblEmployeeData.LName, 
        tblEmployeeData.Pernr, tblEmployeeData.Occurrences, tblEmployeeData.Standing
    FROM tblEmployeeData
    INNER JOIN tblOccurrence AS x ON tblEmployeeData.TechID = x.TechID
    WHERE (((x.OccurrenceDate) Between DateAdd("m",-6,Date()) And Date()) AND ((Exists     
        (SELECT * FROM tblOccurrence AS y  WHERE y.TechID = x.TechID AND DATEADD 
        ("d", -1, x.[OccurrenceDate]) = y.[OccurrenceDate]))=False))
    GROUP BY x.TechID, tblEmployeeData.LName, tblEmployeeData.Pernr, tblEmployeeData.Occurrences, tblEmployeeData.Standing) a
    WHERE a.Cnt BETWEEN tblRank.Low And tblrank.High
    
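    The banding trick above (join a count against a Low/High lookup table) is engine-agnostic; a minimal sqlite3 version of just the lookup half, with the Rank rows from the answer:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tblRank (Low INTEGER, High INTEGER, Rank TEXT)")
con.executemany("INSERT INTO tblRank VALUES (?, ?, ?)", [
    (0, 3, "Good"), (4, 5, "Verbal Warning"), (6, 7, "Written Warning"),
    (8, 8, "Final Written Warning"), (9, 99, "Termination"),
])

def rank_for(cnt):
    """Map an occurrence count onto its band, like `cnt BETWEEN Low AND High`."""
    row = con.execute(
        "SELECT Rank FROM tblRank WHERE ? BETWEEN Low AND High", (cnt,)
    ).fetchone()
    return row[0] if row else None

print(rank_for(2), rank_for(6))  # Good Written Warning
```
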
    qid & accept id: (11969118, 11969295) query: SQLite: How to get certain field from multiple tables? soup:

    soup wrap:

    When you UNION results together, the column takes the name given to it in the first query (in this case, A_name).

    Instead of using UNION ALL, try joining your tables together:

    SELECT A.A_name, B.B_name, C.C_name
    FROM TableA A
        INNER JOIN TableB B ON A.companyId = B.companyId
        INNER JOIN TableC C ON A.companyId = C.companyId
    WHERE A.companyId = 1
    

    This will give you the results on a single row. If you really want the results as separate rows, you could perhaps select the table name along with the *_name field:

    SELECT 'TableA' AS TableName, A_name FROM TableA WHERE companyId = 1 UNION ALL
    SELECT 'TableB', B_name FROM TableB WHERE companyId = 1 UNION ALL
    SELECT 'TableC', C_name FROM TableC WHERE companyId = 1
    
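    The column-naming rule described above is easy to confirm with a throwaway sqlite3 session (the tables and data here are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TableA (companyId INTEGER, A_name TEXT)")
con.execute("CREATE TABLE TableB (companyId INTEGER, B_name TEXT)")
con.execute("INSERT INTO TableA VALUES (1, 'alpha')")
con.execute("INSERT INTO TableB VALUES (1, 'beta')")

cur = con.execute(
    "SELECT A_name FROM TableA WHERE companyId = 1 "
    "UNION ALL "
    "SELECT B_name FROM TableB WHERE companyId = 1"
)
# The result column is named after the first query's column.
print(cur.description[0][0])  # A_name
```
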
    qid & accept id: (12013073, 12013200) query: Extract time from datetime efficiently (as decimal or datetime) soup:

    soup wrap:

    To get a datetime:

    SELECT GetDate() - DateDiff(day, 0, GetDate());
    -- returns the time with zero as the datetime part (1900-01-01).
    

    And to get a number representing the time:

    SELECT DateDiff(millisecond, DateDiff(day, 0, GetDate()), GetDate());
    -- time since midnight in milliseconds, use as you wish
    

    If you really want a string, then:

    SELECT Convert(varchar(8), GetDate(), 108); -- 'hh:mm:ss'
    SELECT Convert(varchar(12), GetDate(), 114); -- 'hh:mm:ss.nnn' where nnn is milliseconds
    
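    The "number since midnight" idea is the same trick you would use in any language; a small Python illustration of the millisecond arithmetic (pure datetime, no SQL, fixed example instant):

```python
from datetime import datetime

now = datetime(2024, 5, 1, 13, 45, 30, 250000)  # fixed example instant
midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)

# Equivalent of DateDiff(millisecond, <midnight>, <now>) in T-SQL.
ms_since_midnight = int((now - midnight).total_seconds() * 1000)
print(ms_since_midnight)  # 49530250
```
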
    qid & accept id: (12050795, 12050859) query: How to remove null values from a count function soup:

    soup wrap:

    The problem is that you return all rows in table a1_publisher. Try this instead.

    select j.publisher_id, count(j.publisher_id)
    FROM a1_journal j inner join a1_publisher p ON  j.publisher_id=p.publisher_id 
    GROUP BY j.publisher_id
    HAVING count(j.publisher_id) >=3
    ORDER BY count(j.publisher_id) DESC
    

    UPDATE:

    To select the publisher's name there are two ways.

    1. If publisher's name is unique you can add the column to group by like this

      select j.publisher_id,p.publisher_name, count(j.publisher_id)
      FROM a1_journal j 
        inner join a1_publisher p ON  j.publisher_id=p.publisher_id 
      GROUP BY j.publisher_id, p.publisher_name
      HAVING count(j.publisher_id) >=3
      ORDER BY count(j.publisher_id) DESC
      
    2. If it's not unique, you should have another join with a1_publisher like this.

      SELECT aj.publisher_id, aj.numberOfJournals, ap.publisher_name
      FROM a1_publisher ap 
      INNER JOIN (
          SELECT j.publisher_id, count(j.publisher_id) numberOfJournals
          FROM a1_journal j 
             inner join a1_publisher p ON  j.publisher_id=p.publisher_id 
          GROUP BY j.publisher_id
          HAVING count(j.publisher_id) >=3  ) aj 
      ON ap.publisher_id = aj.publisher_id
      ORDER BY aj.numberOfJournals DESC
      
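    The core pattern here (aggregate, then filter the groups with HAVING) ports to any engine; a compact sqlite3 check with invented data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a1_journal (journal_id INTEGER, publisher_id INTEGER)")
con.executemany("INSERT INTO a1_journal VALUES (?, ?)",
                [(1, 7), (2, 7), (3, 7), (4, 8)])  # publisher 7 has three journals

rows = con.execute("""
    SELECT publisher_id, COUNT(*) FROM a1_journal
    GROUP BY publisher_id
    HAVING COUNT(*) >= 3
""").fetchall()
print(rows)  # [(7, 3)]
```
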
    qid & accept id: (12063841, 12063860) query: Display value from column B if column A is NULL soup:

    soup wrap:

    Use ISNULL() or COALESCE(), or CASE

    SELECT    ISNULL(ColumnA, ColumnB) AS [YourColumn]
    FROM      FOO
    

    OR

    SELECT    COALESCE(ColumnA, ColumnB) AS [YourColumn]
    FROM      FOO
    

    OR

    SELECT    CASE WHEN ColumnA IS NULL THEN
                  ColumnB
              ELSE
                  ColumnA
              END AS [YourColumn]
    FROM      FOO
    
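    COALESCE is standard SQL, so the same pattern can be checked in a quick sqlite3 session (SQLite's two-argument IFNULL plays the role of ISNULL; the FOO table below is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE FOO (ColumnA TEXT, ColumnB TEXT)")
con.executemany("INSERT INTO FOO VALUES (?, ?)",
                [(None, "fallback"), ("primary", "unused")])

# ColumnB is used only where ColumnA is NULL.
rows = con.execute("SELECT COALESCE(ColumnA, ColumnB) FROM FOO").fetchall()
print(rows)  # [('fallback',), ('primary',)]
```
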
    qid & accept id: (12085307, 12087577) query: sec_to_time() function in PostgreSQL? soup:

    soup wrap:

    Use to_char:

    regress=# SELECT to_char( (9999999 ||' seconds')::interval, 'HH24:MI:SS' );
      to_char   
    ------------
     2777:46:39
    (1 row)
    

    Here's a function that produces a text formatted value:

    CREATE OR REPLACE FUNCTION sec_to_time(bigint) RETURNS text AS $$
    SELECT to_char( ($1|| ' seconds')::interval, 'HH24:MI:SS');
    $$ LANGUAGE 'SQL' IMMUTABLE;
    

    eg:

    regress=# SELECT sec_to_time(9999999);
     sec_to_time 
    -------------
     2777:46:39
    (1 row)
    

    If you'd prefer an INTERVAL result, use:

    CREATE OR REPLACE FUNCTION sec_to_time(bigint) RETURNS interval AS $$
    SELECT justify_interval( ($1|| ' seconds')::interval);
    $$ LANGUAGE 'SQL' IMMUTABLE;
    

    ... which will produce results like:

    SELECT sec_to_time(9999999);
           sec_to_time       
    -------------------------
     3 mons 25 days 17:46:39
    (1 row)
    

    Don't cast an INTERVAL to TIME though; it'll discard the days part. Use to_char(theinterval, 'HH24:MI:SS') to convert it to text without truncation instead.
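
    As a sanity check on the arithmetic, the same seconds-to-clock conversion in plain Python (hours are allowed to run past 24, as in the interval formatting above):

```python
def sec_to_time(total_seconds):
    """Format seconds as HH:MI:SS with hours allowed to exceed 24."""
    hours, rem = divmod(total_seconds, 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}"

print(sec_to_time(9999999))  # 2777:46:39
```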

    qid & accept id: (12088243, 12110722) query: ActiveX calling URL page soup:

    soup wrap:

    Following up on the suggestion by @Ted, you can also fetch a URL using native Microsoft capabilities in an in-process fashion. You can do this via a component known as WinHTTP (the latest appears to be WinHTTP 5.1).

    See my script below which includes a function to simply obtain the status of a URL. When I run this script I get the following output:

    http://www.google.com => 200 [OK]
    http://www.google.com/does_not_exist => 404 [Not Found]
    http://does_not_exist.google.com => -2147012889
        [The server name or address could not be resolved]
    

    If you want the actual content behind a URL, try oHttp.ResponseText. Here's the WinHTTP reference if you are interested in other capabilities as well.

    Option Explicit
    
    Dim aUrlList
    aUrlList = Array( _
        "http://www.google.com", _
        "http://www.google.com/does_not_exist", _
        "http://does_not_exist.google.com" _
    )
    
    Dim i
    For i = 0 To UBound(aUrlList)
        WScript.Echo aUrlList(i) & " => " & GetUrlStatus(aUrlList(i))
    Next
    
    Function GetUrlStatus(sUrl)
        Dim oHttp : Set oHttp = CreateObject("WinHttp.WinHttpRequest.5.1")
    
        On Error Resume Next
    
        With oHttp
            .Open "GET", sUrl, False
            .Send
        End With
    
        If Err Then
            GetUrlStatus = Err.Number & " [" & Err.Description & "]"
        Else
            GetUrlStatus = oHttp.Status & " [" & oHttp.StatusText & "]"
        End If
    
        Set oHttp = Nothing
    End Function
    
    qid & accept id: (12133106, 12133149) query: MySQL - How do I compare two columns for repeated values? soup:

    soup wrap:

    This should work,

    Select f1.FRIEND_ID,f1.FRIEND_NAME from 
    FRIENDS f1,FRIENDS f2 where f1.FRIEND_ID =f2.FRIEND_ID and 
    f1.id=1 and f2.id=2
    

    here is the sample: http://sqlfiddle.com/#!2/c9f36/1/0

    also if you want to get all people having common friends try this

    Select f1.FRIEND_ID,f1.FRIEND_NAME,f1.id 'first person',f2.id as 'second person' from 
    FRIENDS f1,FRIENDS f2 where f1.FRIEND_ID =f2.FRIEND_ID and 
    f1.id < f2.id

    this will return two people having same friends per row: http://sqlfiddle.com/#!2/c9f36/2/0
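
    The self-join idea can be replayed in sqlite3; the schema below is guessed from the query (each row says "person id has friend FRIEND_ID"), and the data is invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE FRIENDS (id INTEGER, FRIEND_ID INTEGER, FRIEND_NAME TEXT)")
con.executemany("INSERT INTO FRIENDS VALUES (?, ?, ?)", [
    (1, 10, "Ann"), (1, 11, "Bob"),
    (2, 10, "Ann"), (2, 12, "Cid"),
])

# Friends that person 1 and person 2 have in common (the answer's self-join).
rows = con.execute("""
    SELECT f1.FRIEND_ID, f1.FRIEND_NAME
    FROM FRIENDS f1 JOIN FRIENDS f2 ON f1.FRIEND_ID = f2.FRIEND_ID
    WHERE f1.id = 1 AND f2.id = 2
""").fetchall()
print(rows)  # [(10, 'Ann')]
```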

    qid & accept id: (12151979, 12152003) query: Have an array in a SQL field. How to display it systematically? soup:

    soup wrap:

    You can use FIND_IN_SET() function for that.

    For example, say you have records like this

    Orders Table
    ------------------------------------
    OrderID     |     attachedCompanyIDs
    ------------------------------------
       1                     1,2,3               -- comma separated values
       2                     2,4     
    

    and

    Company Table
    --------------------------------------
    CompanyID      |        name
    --------------------------------------
        1                 Company 1
        2                 Another Company
        3                 StackOverflow
        4                 Nothing
    

    Using the function

    SELECT name 
    FROM orders, company
    WHERE orderID = 1 AND FIND_IN_SET(companyID, attachedCompanyIDs)
    

    will result

    name
    ---------------
    Company 1
    Another Company
    StackOverflow
    
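    FIND_IN_SET is MySQL-specific; to experiment with the idea elsewhere you can emulate it, for example by registering a small user-defined function in sqlite3 (everything below is an illustration of the behavior, not MySQL's implementation):

```python
import sqlite3

def find_in_set(needle, haystack):
    """Return the 1-based position of needle in a comma-separated list, or 0."""
    parts = haystack.split(",")
    return parts.index(str(needle)) + 1 if str(needle) in parts else 0

con = sqlite3.connect(":memory:")
con.create_function("FIND_IN_SET", 2, find_in_set)
con.execute("CREATE TABLE orders (orderID INTEGER, attachedCompanyIDs TEXT)")
con.execute("CREATE TABLE company (companyID INTEGER, name TEXT)")
con.execute("INSERT INTO orders VALUES (1, '1,2,3'), (2, '2,4')")
con.executemany("INSERT INTO company VALUES (?, ?)",
                [(1, "Company 1"), (2, "Another Company"),
                 (3, "StackOverflow"), (4, "Nothing")])

rows = con.execute("""
    SELECT name FROM orders, company
    WHERE orderID = 1 AND FIND_IN_SET(companyID, attachedCompanyIDs)
""").fetchall()
print(rows)  # [('Company 1',), ('Another Company',), ('StackOverflow',)]
```
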
    qid & accept id: (12160776, 12161066) query: SQL cumulative % Total soup:

    soup wrap:

    I think you're looking for something like this, though your example calculations may be off a little:

    SELECT
        COLA,
        COLB,
        ROUND(
            -- Divide the running total...
            (SELECT CAST(SUM(COLB) AS FLOAT) FROM #MyTempTable WHERE COLA <= a.COLA) /
            -- ...by the full total
            (SELECT CAST(SUM(COLB) AS FLOAT) FROM #MyTempTable),
            2
        ) AS COLC
    FROM #MyTempTable AS a
    ORDER BY COLA
    

    EDIT: I've added rounding.

    This gives us the following output:

    COLA    COLB    COLC
    Name1   218     0.35
    Name2   157     0.6
    Name3   134     0.81
    Name4   121     1
    

    The reason that your results are 0 (or 1) is because you are dividing ints by ints, thus giving you an int (see Datatype precedence).

    UPDATE:

    I should add that this uses a "triangular join" to get the running total (WHERE COLA <= a.COLA). Depending upon your SQL Server version, you may compare this to other options if performance becomes a concern.
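
    The running-total arithmetic is easy to sanity-check outside SQL; a few lines of Python over the sample COLA/COLB values reproduce the COLC column shown above:

```python
# COLA/COLB values from the answer's sample output.
rows = [("Name1", 218), ("Name2", 157), ("Name3", 134), ("Name4", 121)]
total = sum(v for _, v in rows)

running = 0
result = []
for name, value in rows:
    running += value
    result.append((name, value, round(running / total, 2)))

print(result)
# [('Name1', 218, 0.35), ('Name2', 157, 0.6), ('Name3', 134, 0.81), ('Name4', 121, 1.0)]
```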

    qid & accept id: (12175474, 12175510) query: simple flow control with mysql soup:
    soup wrap:
    UPDATE A SET act=now() WHERE id=1 AND act_reset <> 0
    

    Is this the query you are looking for?

    Using an IF statement in MySQL (note that IF ... END IF is only valid inside a stored program, such as a procedure or trigger):

    IF act_reset <> 0 THEN 
      UPDATE A SET act=now() WHERE id=1 
    END IF; 
    
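    The first form, folding the condition into the WHERE clause, is usually all you need: a WHERE that matches nothing simply updates zero rows. A quick sqlite3 check with invented data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE A (id INTEGER, act TEXT, act_reset INTEGER)")
con.executemany("INSERT INTO A VALUES (?, ?, ?)", [(1, None, 0), (2, None, 1)])

# act_reset = 0 for id 1, so the extra predicate makes this a no-op.
cur = con.execute("UPDATE A SET act = datetime('now') WHERE id = 1 AND act_reset <> 0")
print(cur.rowcount)  # 0
```
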
    qid & accept id: (12221037, 12221580) query: How can I query row data as columns? soup:

    soup wrap:

    You can do an UNPIVOT and then a PIVOT of the data. This can be done either statically or dynamically:

    Static Version:

    select *
    from
    (
      select fk, col + cast(rownumber as varchar(1)) new_col,
        val
      from 
      (
        select fk, rownumber, value, cast(type as varchar(10)) type,
          status
        from yourtable
      ) x
      unpivot
      (
        val
        for col in (value, type, status)
      ) u
    ) x1
    pivot
    (
      max(val)
      for new_col in
        ([value1], [type1], [status1], 
         [value2], [type2], [status2],
        [value3], [type3])
    ) p
    

    see SQL Fiddle with demo

    Dynamic Version, this will get the list of columns to unpivot and then to pivot at run-time:

    DECLARE @colsUnpivot AS NVARCHAR(MAX),
        @query  AS NVARCHAR(MAX),
        @colsPivot as  NVARCHAR(MAX)
    
    select @colsUnpivot = stuff((select ','+quotename(C.name)
             from sys.columns as C
             where C.object_id = object_id('yourtable') and
                   C.name not in ('fk', 'rownumber')
             for xml path('')), 1, 1, '')
    
    select @colsPivot = STUFF((SELECT  ',' 
                          + quotename(c.name 
                             + cast(t.rownumber as varchar(10)))
                        from yourtable t
                         cross apply 
                          sys.columns as C
                       where C.object_id = object_id('yourtable') and
                             C.name not in ('fk', 'rownumber')
                       group by c.name, t.rownumber
                       order by t.rownumber
                FOR XML PATH(''), TYPE
                ).value('.', 'NVARCHAR(MAX)') 
            ,1,1,'')
    
    
    set @query 
      = 'select *
          from
          (
            select fk, col + cast(rownumber as varchar(10)) new_col,
              val
            from 
            (
              select fk, rownumber, value, cast(type as varchar(10)) type,
                status
              from yourtable
            ) x
            unpivot
            (
              val
              for col in ('+ @colsunpivot +')
            ) u
          ) x1
          pivot
          (
            max(val)
            for new_col in
              ('+ @colspivot +')
          ) p'
    
    exec(@query)
    

    see SQL Fiddle with Demo

    Both will generate the same results, however the dynamic is great if you do not know the number of columns ahead of time.

    The Dynamic version is working under the assumption that the rownumber is already a part of the dataset.
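
    For intuition, the row-to-column reshaping the PIVOT performs can be sketched in a few lines of Python (column naming follows the value1/type1/... convention from the static version; the rows are invented):

```python
# Each input row: (fk, rownumber, value, type, status) -- mirrors the answer's table.
rows = [
    (1, 1, "v1", "t1", "s1"),
    (1, 2, "v2", "t2", "s2"),
]

pivoted = {}
for fk, rn, value, typ, status in rows:
    rec = pivoted.setdefault(fk, {"fk": fk})
    rec[f"value{rn}"] = value   # e.g. value1, value2, ...
    rec[f"type{rn}"] = typ
    rec[f"status{rn}"] = status

print(pivoted[1])
```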

    qid & accept id: (12248899, 12249175) query: How can i concatenate and make a group of text in sql server? soup:

    soup wrap:

    Here, try this one,

    SELECT  a.dept_id, 
            NewTable.NameValues
    FROM    (
              SELECT DISTINCT dept_ID
              FROM tableA
            ) a 
            LEFT JOIN
            (
              SELECT  dept_id,
                    STUFF((
                      SELECT  ', ' + [Name] 
                      FROM    tableA
                      WHERE   ( dept_id = Results.dept_id )
                      FOR XML PATH('')), 1, 1, '') AS NameValues
              FROM    tableA Results
              GROUP BY dept_id
            ) NewTable
            on a.dept_id = NewTable.dept_id
    GO
    

    SQLFiddle Demo

    Here's another version

    SELECT  a.dept_id, 
            SUBSTRING(d.nameList,1, LEN(d.nameList) - 1) ConcatenateNames
    FROM 
            (
                SELECT DISTINCT dept_id
                FROM   tableA
            ) a
            CROSS APPLY
            (
                SELECT name + ', ' 
                FROM tableA AS B 
                WHERE A.dept_id = B.dept_id 
                FOR XML PATH('')
            ) D (nameList)
    GO
    

    SQLFiddle Demo
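
    The STUFF ... FOR XML PATH dance is SQL Server's pre-2017 way of doing grouped string aggregation (STRING_AGG replaces it in SQL Server 2017 and later); other engines have it built in, for example group_concat in SQLite, shown here with invented data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tableA (dept_id INTEGER, name TEXT)")
con.executemany("INSERT INTO tableA VALUES (?, ?)",
                [(1, "Ann"), (1, "Bob"), (2, "Cid")])

# One comma-separated string of names per department.
rows = con.execute(
    "SELECT dept_id, group_concat(name, ', ') FROM tableA GROUP BY dept_id"
).fetchall()
print(rows)  # [(1, 'Ann, Bob'), (2, 'Cid')]
```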

    qid & accept id: (12250195, 12250216) query: How can I update more than one record in MS SQL? soup:

    soup wrap:

    The first one would be:

    UPDATE Stackoverflow
    Set StateId = 1
    where GeneralId = 1000;
    

    For the second one:

    UPDATE Stackoverflow
    Set StateId = 1
    where GeneralId = 1001;
    

    For both of them:

    UPDATE Stackoverflow
    Set StateId = 1
    where GeneralId IN (1000,1001);
    
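    The IN form is what you want whenever the new value is the same for every key: one statement touches both rows. A quick sqlite3 check (table contents invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Stackoverflow (GeneralId INTEGER, StateId INTEGER)")
con.executemany("INSERT INTO Stackoverflow VALUES (?, ?)",
                [(1000, 0), (1001, 0), (1002, 0)])

# One UPDATE touches both rows; row 1002 is left alone.
cur = con.execute("UPDATE Stackoverflow SET StateId = 1 WHERE GeneralId IN (1000, 1001)")
print(cur.rowcount)  # 2
```
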
    qid & accept id: (12251993, 12252082) query: Dumping sqlite3 database for use in Titanium soup:

    soup wrap:

    Why do you dump the database file when you can simply copy it, i.e. use it as it is?

    As explained here, sqlite databases are cross-platform:

    A database in SQLite is a single disk file. Furthermore, the file format is cross-platform. A database that is created on one machine can be copied and used on a different machine with a different architecture. SQLite databases are portable across 32-bit and 64-bit machines and between big-endian and little-endian architectures.

    On the other hand, you should be able to dump and compress your database like this:

    echo '.dump' | sqlite3 foo.db | gzip -c > foo.dump.gz
    

    and restore it in a new SQLite database:

    gunzip -c foo.dump.gz | sqlite3 foo.new.db
    
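    If the dump step happens from Python anyway, sqlite3's Connection.iterdump does the same thing as the shell pipeline above; here it round-trips a tiny in-memory database:

```python
import sqlite3

# Build a tiny database, dump it to SQL text, and restore into a fresh database.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (x INTEGER)")
src.execute("INSERT INTO t VALUES (42)")

dump_sql = "\n".join(src.iterdump())  # same idea as `echo '.dump' | sqlite3 foo.db`

dst = sqlite3.connect(":memory:")
dst.executescript(dump_sql)
print(dst.execute("SELECT x FROM t").fetchone())  # (42,)
```
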
    qid & accept id: (12265411, 12265431) query: How can I tell if a VARCHAR variable contains a substring? soup:

    soup wrap:

    The standard SQL way is to use like:

    where @stringVar like '%thisstring%'
    

    That is in a query statement. You can also do this in TSQL:

    if @stringVar like '%thisstring%'
    
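    The wildcard test behaves the same in any SQL engine; for example, checked through sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# LIKE with leading and trailing % is a substring test; 1 = match, 0 = no match.
hit, = con.execute("SELECT 'abc thisstring xyz' LIKE '%thisstring%'").fetchone()
miss, = con.execute("SELECT 'something else' LIKE '%thisstring%'").fetchone()
print(hit, miss)  # 1 0
```
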
    qid & accept id: (12335438, 12338490) query: Server timezone offset value soup:

    soup wrap:

    For the time zone you can:

    SHOW timezone;
    

    or the equivalent:

    SELECT current_setting('TIMEZONE');
    

    but this can be in any format accepted by the server, so it may return UTC, 08:00, Australia/Victoria, or similar.

    Frustratingly, there appears to be no built-in function to report the time offset from UTC the client is using in hours and minutes, which seems kind of insane to me. You can get the offset by comparing the current time in UTC to the current time locally:

    SELECT age(current_timestamp AT TIME ZONE 'UTC', current_timestamp)
    

    ... but IMO it's cleaner to extract the tz offset in seconds from the current_timestamp and convert to an interval:

    SELECT to_char(extract(timezone from current_timestamp) * INTERVAL '1' second, 'FMHH24:MI');
    

    That'll match the desired result except that it doesn't produce a leading zero, so -05:00 is just -5:00. Annoyingly it seems to be impossible to get to_char to produce a leading zero for hours, leaving me with the following ugly manual formatting:

    CREATE OR REPLACE FUNCTION oracle_style_tz() RETURNS text AS $$
    SELECT to_char(extract(timezone_hour FROM current_timestamp),'FM00')||':'||
           to_char(extract(timezone_minute FROM current_timestamp),'FM00');
    $$ LANGUAGE 'SQL' STABLE;
    

    Credit to Glenn for timezone_hour and timezone_minute instead of the hack I used earlier with extract(timezone from current_timestamp) * INTERVAL '1' second and a CTE.

    If you don't need the leading zero you can instead use:

    CREATE OR REPLACE FUNCTION oracle_style_tz() RETURNS text AS $$
    SELECT to_char(extract(timezone from current_timestamp) * INTERVAL '1' second, 'FMHH24:MI');
    $$ LANGUAGE 'SQL' STABLE;
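
    For comparison on the client side, Python can report its own local UTC offset in the same signed HH:MM shape, leading zero included (a small stdlib sketch, not tied to PostgreSQL):

```python
from datetime import datetime, timezone

def utc_offset_str() -> str:
    """Local UTC offset formatted as a signed HH:MM string, e.g. '-05:00'."""
    offset = datetime.now(timezone.utc).astimezone().utcoffset()
    minutes = int(offset.total_seconds() // 60)
    sign = "-" if minutes < 0 else "+"
    hours, mins = divmod(abs(minutes), 60)
    return f"{sign}{hours:02d}:{mins:02d}"

print(utc_offset_str())
```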
    


    qid & accept id: (12366390, 12366471) query: How to select product that have the maximum price of each category? soup:

    soup wrap:

    Try this one if you want to get the whole row:

    (supports most RDBMS)

    SELECT  a.*
    FROM    tbProduct a
            INNER JOIN
            (
                SELECT Category, MAX(Price) maxPrice
                FROM tbProduct
                GROUP BY Category
            ) b ON a.category = b.category AND
                    a.price = b.maxPrice
    

    If you are using MSSQL 2008+

    WITH allProducts AS
    (
    SELECT  ProductId,ProductName,Category,Price,
            ROW_NUMBER() OVER (PARTITION BY CATEGORY ORDER BY Price DESC) ROW_NUM
    FROM tbProduct
    )
    SELECT ProductId,ProductName,Category,Price
    FROM allProducts
    WHERE ROW_NUM = 1
    

    or

    SELECT ProductId,ProductName,Category,Price
    FROM    
    (
    SELECT  ProductId,ProductName,Category,Price,
            ROW_NUMBER() OVER (PARTITION BY CATEGORY ORDER BY Price DESC) ROW_NUM
    FROM tbProduct
    ) allProducts
    WHERE ROW_NUM = 1
    

    SQLFiddle Demo
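
The portable join-on-aggregate form above runs unchanged in SQLite, so it can be checked from Python (sample products are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbProduct (ProductId INTEGER, ProductName TEXT, Category TEXT, Price REAL);
INSERT INTO tbProduct VALUES
  (1, 'apple', 'fruit', 1.0),
  (2, 'mango', 'fruit', 2.5),
  (3, 'kale',  'veg',   1.2),
  (4, 'onion', 'veg',   0.8);
""")

# greatest-price-per-category via the derived-table join
rows = conn.execute("""
SELECT a.*
FROM tbProduct a
INNER JOIN (
    SELECT Category, MAX(Price) AS maxPrice
    FROM tbProduct
    GROUP BY Category
) b ON a.Category = b.Category AND a.Price = b.maxPrice
ORDER BY a.Category
""").fetchall()
print(rows)  # one row per category: the priciest product
```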

    qid & accept id: (12386646, 12386858) query: Execute a result in SQL Server using a stored procedure soup:

    soup wrap:

    You should use dynamic SQL to run the nvarchar(max) query string returned by the first procedure / query.

    Edit:

    DECLARE @ResultOfTheFirstQuery nvarchar(max)
    
    SELECT @ResultOfTheFirstQuery = (Select Top(1)RequiredQuery 
                                     as ReqQry from EPMaster)
    
    exec sp_executeSql @ResultOfTheFirstQuery
    

    Or if you need more complex logic, you can write another SP, which can have a return value:

    DECLARE @ResultOfTheFirstQuery nvarchar(max)
    
    SELECT @ResultOfTheFirstQuery = FirstStoredprocedure @params
    
    exec sp_executeSql @ResultOfTheFirstQuery
    

    Here is an already well-answered question on how to get the return value. You can use RETURN or an OUTPUT parameter.

    Here is how to use the sp_executeSql
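
sp_executesql is SQL Server-specific, but the pattern itself — read a query string out of a table, then execute that string — can be sketched with Python's sqlite3 (table and column names follow the question; the stored query is a stand-in):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE EPMaster (RequiredQuery TEXT);
INSERT INTO EPMaster VALUES ('SELECT 1 + 1');
""")

# step 1: fetch the stored query string (the TOP(1) of the original)
query = conn.execute(
    "SELECT RequiredQuery FROM EPMaster LIMIT 1"
).fetchone()[0]

# step 2: execute the string we just fetched -- the "dynamic SQL" part
result = conn.execute(query).fetchone()[0]
print(result)  # 2
```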

    qid & accept id: (12407247, 12407311) query: SQL stored procedure passing parameter into "order by" soup:

    soup wrap:

    Only by being slightly silly:

    CREATE PROCEDURE [dbo].[TopVRM]
    @orderby varchar(255)
    AS
    SELECT Peroid1.Pareto FROM dbo.Peroid1
    GROUP by Pareto
    ORDER by CASE WHEN @orderby='ASC' THEN Pareto END,
             CASE WHEN @orderby='DESC' THEN Pareto END DESC
    

    You don't strictly need to put the second sort condition in a CASE expression at all(*), and if Pareto is numeric, you may decide to just do CASE WHEN @orderby='ASC' THEN 1 ELSE -1 END * Pareto

    (*) The second sort condition only has an effect when the first sort condition considers two rows to be equal. This is either because both rows have the same Pareto value (so the reverse sort would also consider them equal), or because the first CASE expression is returning NULLs (so @orderby isn't 'ASC', so we want to perform the DESC sort).
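
The CASE-in-ORDER-BY trick is standard SQL, so it can be demonstrated with Python's sqlite3 (a named parameter stands in for the stored-procedure argument):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Peroid1 (Pareto INTEGER);
INSERT INTO Peroid1 VALUES (3), (1), (2), (2);
""")

def top_vrm(orderby: str):
    # one CASE branch per direction, exactly as in the procedure above
    return [r[0] for r in conn.execute("""
        SELECT Pareto FROM Peroid1
        GROUP BY Pareto
        ORDER BY CASE WHEN :dir = 'ASC'  THEN Pareto END,
                 CASE WHEN :dir = 'DESC' THEN Pareto END DESC
    """, {"dir": orderby})]

print(top_vrm('ASC'), top_vrm('DESC'))  # [1, 2, 3] [3, 2, 1]
```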


    You might also want to consider retrieving both result sets in one go, rather than doing two calls:

    CREATE PROCEDURE [dbo].[TopVRM]
    @orderby varchar(255)
    AS
    
    SELECT * FROM (
        SELECT
           *,
           ROW_NUMBER() OVER (ORDER BY Pareto) as rn1,
           ROW_NUMBER() OVER (ORDER BY Pareto DESC) as rn2
        FROM (
            SELECT Peroid1.Pareto
            FROM dbo.Peroid1
            GROUP by Pareto
        ) t
    ) t2
    WHERE rn1 between 1 and 10 or rn2 between 1 and 10
    ORDER BY rn1
    

    This will give you the top 10 and the bottom 10, in order from top to bottom. But if there are fewer than 20 results in total, you won't get duplicates, unlike your current plan.

    qid & accept id: (12419421, 12419497) query: [FIXED]From 2 mySQL databases, to one soup:

    soup wrap:

    --To get all the columns from locatie table

    select l.* from   locatie l
    join   persooninfo p
    on     l.id=p.id_p
    

    --To get all the columns from persooninfo table

    select p.* from   locatie l
    join   persooninfo p
    on     l.id=p.id_p
    

    ----To get all the columns from persooninfo and locatie table

    select * from   locatie l
    join   persooninfo p
    on     l.id=p.id_p
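
Runnable sketch of the join via Python's sqlite3 (the non-key columns `plaats` and `naam` are invented for illustration; only `id` and `id_p` come from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE locatie (id INTEGER, plaats TEXT);
CREATE TABLE persooninfo (id_p INTEGER, naam TEXT);
INSERT INTO locatie VALUES (1, 'Amsterdam');
INSERT INTO persooninfo VALUES (1, 'Jan');
""")
rows = conn.execute("""
    SELECT * FROM locatie l
    JOIN persooninfo p ON l.id = p.id_p
""").fetchall()
print(rows)  # [(1, 'Amsterdam', 1, 'Jan')]
```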
    
    qid & accept id: (12419854, 12420006) query: Dropping the same column name from mutiple tables in Oracle soup:

    soup wrap:

    No. An ALTER TABLE statement cannot alter more than one table at a time. You could write some dynamic SQL based on ALL_TAB_COLS, e.g.

    SELECT 'ALTER TABLE ' || owner || '.' || table_name || ' DROP COLUMN '|| column_name || ';'
    FROM all_tab_columns
    WHERE column_name = 'MY_UNWANTED_COLUMN'
    AND owner = 'MY_OWNER'
    /
    

    then run that script. You might want to add

    AND table_name IN ('MY_TAB1','MY_TAB2')
    

    to specify an exact list of tables for extra peace of mind.
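
The same script generation can of course happen client-side; a minimal sketch (in Oracle the table list would come from ALL_TAB_COLUMNS rather than being hard-coded, and all names here are illustrative):

```python
# build one ALTER TABLE ... DROP COLUMN statement per table
owner = "MY_OWNER"
column = "MY_UNWANTED_COLUMN"
tables = ["MY_TAB1", "MY_TAB2"]  # illustrative; normally queried from the dictionary

script = [f"ALTER TABLE {owner}.{t} DROP COLUMN {column};" for t in tables]
print("\n".join(script))
```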

    qid & accept id: (12456897, 12457017) query: MySQL: same field value in multiple UNION soup:

    soup wrap:

    I think this is enough:

    SELECT candidate_id 
    FROM actions_log AS a
    WHERE job_id = 1858 
      AND ( action = 'a'  
         OR action = 'b' 
        AND EXISTS 
            ( SELECT candidate_id 
              FROM actions_log 
              WHERE job_id = a.job_id
                AND action = 'c'
            )
          ) ;
    

    or if you want to have the conditions separated, so you can build more complex queries easier:

        SELECT candidate_id 
        FROM actions_log AS a
        WHERE job_id = 1858 
          AND action = 'a'  
    UNION DISTINCT
        SELECT b.candidate_id 
        FROM actions_log AS b
          JOIN actions_log AS c
            ON  c.candidate_id = b.candidate_id
            AND c.job_id = b.job_id
        WHERE b.job_id = 1858 
          AND b.action = 'b'
          AND c.action = 'c' ;
    
    qid & accept id: (12463628, 12464045) query: MySQL - Get a counter for each duplicate value soup:

    soup wrap:

    Unfortunately, MySQL does not have window functions, which are what you would need here. So you will have to use something like this:

    Final Query

    select data, group_row_number, overall_row_num
    from
    (
      select data,
            @num := if(@data = `data`, @num + 1, 1) as group_row_number,
            @data := `data` as dummy, overall_row_num
      from
      (
        select data, @rn:=@rn+1 overall_row_num
        from yourtable, (SELECT @rn:=0) r
      ) x
      order by data, overall_row_num
    ) x
    order by overall_row_num
    

    see SQL Fiddle with Demo

    Explanation:

    First, the inner select applies a mock row_number to all of the records in your table (See SQL Fiddle with Demo):

    select data, @rn:=@rn+1 overall_row_num
    from yourtable, (SELECT @rn:=0) r
    

    The second part of the query compares each row to the one before it to see if it has the same value; if it doesn't, the group_row_number starts over (see SQL Fiddle with Demo):

    select data,
          @num := if(@data = `data`, @num + 1, 1) as group_row_number,
          @data := `data` as dummy, overall_row_num
    from
    (
      select data, @rn:=@rn+1 overall_row_num
      from yourtable, (SELECT @rn:=0) r
    ) x
    order by data, overall_row_num
    

    The last select returns the values you want and places them back in the order you requested:

    select data, group_row_number, overall_row_num
    from
    (
      select data,
            @num := if(@data = `data`, @num + 1, 1) as group_row_number,
            @data := `data` as dummy, overall_row_num
      from
      (
        select data, @rn:=@rn+1 overall_row_num
        from yourtable, (SELECT @rn:=0) r
      ) x
      order by data, overall_row_num
    ) x
    order by overall_row_num
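
For what it's worth, MySQL 8.0+ (and SQLite 3.25+) do have window functions, so on a modern server the whole user-variable dance collapses into a single ROW_NUMBER() call; a sketch run through Python's sqlite3, with an explicit id column standing in for the row order:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE yourtable (id INTEGER PRIMARY KEY, data TEXT);
INSERT INTO yourtable (data) VALUES ('a'), ('b'), ('a'), ('a'), ('b');
""")
rows = conn.execute("""
SELECT data,
       ROW_NUMBER() OVER (PARTITION BY data ORDER BY id) AS group_row_number,
       id AS overall_row_num
FROM yourtable
ORDER BY id
""").fetchall()
print(rows)  # each duplicate value gets its own running counter
```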
    
    qid & accept id: (12498046, 12498385) query: SQL - get latest records from table where field is unique soup:

    soup wrap:

    See SQL Fiddle

    SELECT T.*
    FROM T
    WHERE NOT EXISTS (
      SELECT * 
      FROM T AS _T
      WHERE _T.conversation_id = T.conversation_id
      AND (
        _T.date_created > T.date_created
        OR
        _T.date_created = T.date_created AND _T.id > T.id) 
    )
    ORDER BY T.date_created DESC
    

    gets

    ID      STATUS  CONVERSATION_ID   MESSAGE_ID    DATE_CREATED
    3         2         2                95         May, 05 2012 
    2         2         1                87         March, 03 2012 
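
The NOT EXISTS pattern is portable, so the result above can be reproduced with Python's sqlite3 (sample rows approximate the question's data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE T (id INTEGER, status INTEGER, conversation_id INTEGER,
                message_id INTEGER, date_created TEXT);
INSERT INTO T VALUES
  (1, 1, 1, 80, '2012-01-01'),
  (2, 2, 1, 87, '2012-03-03'),
  (3, 2, 2, 95, '2012-05-05');
""")
# keep a row only if no later row (by date, then id) exists for its conversation
rows = conn.execute("""
SELECT T.* FROM T
WHERE NOT EXISTS (
  SELECT 1 FROM T AS _T
  WHERE _T.conversation_id = T.conversation_id
    AND (_T.date_created > T.date_created
         OR (_T.date_created = T.date_created AND _T.id > T.id))
)
ORDER BY T.date_created DESC
""").fetchall()
print([r[0] for r in rows])  # [3, 2] -- the newest row per conversation
```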
    
    qid & accept id: (12527563, 12527948) query: it is possible to "group by" without losing the original rows? soup:

    soup wrap:

    It also shouldn't touch your performance as long as lookup table will be relatively small.

    \n soup wrap:

    One obvious solution is storing intermediate results within another 'temporary' table, and then performing the aggregation in a second step.

    Another solution is preparing a lookup table containing the sums you need (but there obviously needs to be some grouping ID; I call it MASTER_ID), like this:

    CREATE TABLE comm_lkp AS
    SELECT MASTER_ID, SUM(commentsCount) as cnt
    FROM mycontents
    GROUP BY MASTER_ID
    

    Also create an index on that table's MASTER_ID column. Later, you can modify your query like this:

    SELECT
        ...,
        commentsCount,
        cnt as commentsSum
    FROM
        mycontents as a
            JOIN comm_lkp as b ON (a.MASTER_ID=b.MASTER_ID)
    WHERE
        name LIKE "%mysql%"
    

    It also shouldn't hurt your performance, as long as the lookup table stays relatively small.
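
A runnable sketch of the lookup-table approach via Python's sqlite3 (the sample rows are invented; column names follow the answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mycontents (MASTER_ID INTEGER, name TEXT, commentsCount INTEGER);
INSERT INTO mycontents VALUES
  (1, 'mysql tips', 2),
  (1, 'more mysql', 3),
  (2, 'unrelated',  5);

-- the lookup table of per-group sums
CREATE TABLE comm_lkp AS
  SELECT MASTER_ID, SUM(commentsCount) AS cnt
  FROM mycontents
  GROUP BY MASTER_ID;
""")
rows = conn.execute("""
SELECT a.name, a.commentsCount, b.cnt AS commentsSum
FROM mycontents AS a
JOIN comm_lkp AS b ON a.MASTER_ID = b.MASTER_ID
WHERE a.name LIKE '%mysql%'
""").fetchall()
print(rows)  # each matching row keeps its own count plus the group sum
```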

    qid & accept id: (12530027, 12530093) query: Duplicate table and move it to different filegroup soup:

    soup wrap:

    You could change the default filegroup before the select into, and reset it after:

    select 41 as i into newtable1
    alter database test modify filegroup [secondary] default
    select 41 as i into newtable2
    alter database test modify filegroup [primary] default
    
    select  t.name as TableName
    ,       f.name as Filegroup
    from    sys.tables t
    join    sys.indexes i
    on      t.object_id = i.object_id
    join    sys.filegroups f
    on      f.data_space_id = i.data_space_id
    where   t.name like 'newtable%'
    

    This prints:

    TableName   Filegroup
    newtable1   PRIMARY
    newtable2   SECONDARY
    
    qid & accept id: (12544051, 12545114) query: Randomly assign work location and each location should not exceed the number of designated employees soup:

    soup wrap:

    Maybe something like this:

    select C.* from 
    (
        select *, ROW_NUMBER() OVER(PARTITION BY P.PlaceID, E.Designation ORDER BY NEWID()) AS RandPosition
            from Place as P cross join Employee E
        where P.PlaceName != E.Home AND P.PlaceName != E.CurrentPosting
    ) as C
    where 
        (C.Designation = 'Manager' AND C.RandPosition <= C.Manager) OR
        (C.Designation = 'PO' AND C.RandPosition <= C.PO) OR
        (C.Designation = 'Clerk' AND C.RandPosition <= C.Clerk)
    

    That should attempt to match employees randomly based on their designation, excluding places that match their home or current posting, and without assigning more than what is specified in each column for the designation. However, this could return the same employee for several places, since they could match more than one place under those criteria.


    EDIT: After seeing your comment about not having a need for a high performing single query to solve this problem (which I'm not sure is even possible), and since it seems to be more of a "one-off" process that you will be calling, I wrote up the following code using a cursor and one temporary table to solve your problem of assignments:

    select *, null NewPlaceID into #Employee from Employee
    
    declare @empNo int
    DECLARE emp_cursor CURSOR FOR  
    SELECT EmpNo from Employee order by newid()
    
    OPEN emp_cursor   
    FETCH NEXT FROM emp_cursor INTO @empNo
    
    WHILE @@FETCH_STATUS = 0   
    BEGIN
        update #Employee 
        set NewPlaceID = 
            (
            select top 1 p.PlaceID from Place p 
            where 
                p.PlaceName != #Employee.Home AND 
                p.PlaceName != #Employee.CurrentPosting AND
                (
                    CASE #Employee.Designation 
                    WHEN 'Manager' THEN p.Manager
                    WHEN 'PO' THEN p.PO
                    WHEN 'Clerk' THEN p.Clerk
                    END
                ) > (select count(*) from #Employee e2 where e2.NewPlaceID = p.PlaceID AND e2.Designation = #Employee.Designation)
            order by newid()
            ) 
        where #Employee.EmpNo = @empNo
        FETCH NEXT FROM emp_cursor INTO @empNo   
    END
    
    CLOSE emp_cursor
    DEALLOCATE emp_cursor
    
    select e.*, p.PlaceName as RandomPosting from Employee e
    inner join #Employee e2 on (e.EmpNo = e2.EmpNo)
    inner join Place p on (e2.NewPlaceID = p.PlaceID)
    
    drop table #Employee
    

    The basic idea is that it iterates over the employees in random order and assigns each one a random Place that meets the criteria of a different home and current posting, while controlling how many get assigned to each place for each Designation, to ensure the locations are not "over-assigned" for any role.

    This snippet doesn't actually alter your data though. The final SELECT statement just returns the proposed assignments. However you could very easily alter it to make actual changes to your Employee table accordingly.
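
The cursor's greedy logic — shuffle the employees, then give each one a random eligible place that still has capacity for their designation — can be sketched outside the database as well (all names and capacities below are illustrative):

```python
import random

def assign(employees, places, seed=0):
    """Greedy random assignment mirroring the cursor: shuffle employees,
    give each a random eligible place with remaining capacity for their role."""
    rng = random.Random(seed)
    used = {}  # (place name, role) -> count assigned so far
    order = employees[:]
    rng.shuffle(order)
    result = {}
    for emp in order:
        eligible = [
            p for p in places
            if p['name'] not in (emp['home'], emp['current'])
            and used.get((p['name'], emp['role']), 0) < p['capacity'][emp['role']]
        ]
        if eligible:
            place = rng.choice(eligible)
            used[(place['name'], emp['role'])] = used.get((place['name'], emp['role']), 0) + 1
            result[emp['name']] = place['name']
    return result

employees = [{'name': 'e1', 'role': 'Clerk', 'home': 'A', 'current': 'B'},
             {'name': 'e2', 'role': 'Clerk', 'home': 'B', 'current': 'C'}]
places = [{'name': 'A', 'capacity': {'Clerk': 1}},
          {'name': 'B', 'capacity': {'Clerk': 1}},
          {'name': 'C', 'capacity': {'Clerk': 1}}]
print(assign(employees, places))  # each employee lands in their only eligible place
```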

    qid & accept id: (12579635, 12579757) query: MySQL: Migrating data into a many to many relationship from an OldDB plain table soup:

    soup wrap:

    try this:

    INSERT NewDB.center_has_b (center_id, b_id)
     select 'N', oldb_id from OldDB.oldb WHERE centerN = 1
    

    EDIT: This is based on the first comment for this answer

    insert into center_has_b (center_id,b_id)
    select c.center_id ,old.b_id
    from centers c
    cross join old.b
    where Allcenters = 'Y'
    
    qid & accept id: (12590682, 12590748) query: MySQL database design: User and event table soup:

    soup wrap:

    Yes, you will want to create a JOIN table for the users and the events. Similar to this:

    create table users
    (
        id int,
        name varchar(10) -- add other fields as needed
    );
    
    create table events
    (
        id int,
        name varchar(10),
        e_owner_id int, -- userId of who created the event
        e_date datetime -- add other fields as needed
    );
    
    create table users_events  -- when user wants to attend a record will be added to this table
    (
        u_id int,
        e_id int
    );
    

    Then to query, you would use something like this:

    select *
    from users u
    left join users_events ue
        on u.id = ue.u_id
    left join events e
        on ue.e_id = e.id;
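
The schema and query above run as-is in SQLite, so the whole thing can be tried from Python (sample rows are made up; an ORDER BY is added for deterministic output):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER, name TEXT);
CREATE TABLE events (id INTEGER, name TEXT, e_owner_id INTEGER, e_date TEXT);
CREATE TABLE users_events (u_id INTEGER, e_id INTEGER);
INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
INSERT INTO events VALUES (10, 'meetup', 1, '2012-10-01');
INSERT INTO users_events VALUES (1, 10);  -- alice attends the meetup
""")
rows = conn.execute("""
SELECT u.name, e.name
FROM users u
LEFT JOIN users_events ue ON u.id = ue.u_id
LEFT JOIN events e ON ue.e_id = e.id
ORDER BY u.id
""").fetchall()
print(rows)  # bob attends nothing, so his event columns are NULL
```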
    
    qid & accept id: (12593776, 12593873) query: Oracle SQL: Joining another table with one missing tuple soup:
    soup wrap:
    select *
      from order_information oi
       left join mass_decode md 
         on (
                oi.color_cd = md.cd 
            and oi.key = md.key
         )
    where oi.key = 'KEY_A';
    

    SQLFiddle

    upd:

    According to your updates:

    select *
      from order_information oi
       left join mass_decode md 
         on oi.color_cd = md.cd
    where md.key = 'COLOR_CD' or md.key is null;
    

    SQLFiddle

    qid & accept id: (12698945, 12698989) query: sql oracle duplicates soup:

    soup wrap:

    There are several ways to do this - see SQL Fiddle with Demo of all queries

    You can use a subquery:

    select t1.asset_no,
      t1.sub,
      t1.add_dtm
    from table1 t1
    inner join
    (
      select max(add_dtm) mxdate, asset_no
      from table1
      group by asset_no
    ) t2
      on t1.add_dtm = t2.mxdate
      and t1.asset_no = t2.asset_no
    

    or you can use CTE using row_number():

    with cte as
    (
      select asset_no,
        sub,
        add_dtm,
        row_number() over(partition by asset_no 
                          order by add_dtm desc) rn
      from table1
    ) 
    select *
    from cte
    where rn = 1
    

    Or without CTE using row_number():

    select *
    from 
    (
      select asset_no,
        sub,
        add_dtm,
        row_number() over(partition by asset_no 
                          order by add_dtm desc) rn
      from table1
    ) x
    where rn = 1
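The subquery/MAX variant is the most portable of the three and runs unchanged in SQLite, which makes for a quick check; the rows below are invented.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table table1 (asset_no int, sub varchar(10), add_dtm datetime);
    -- invented sample rows; asset 1 has two entries, only the newest should survive
    insert into table1 values
        (1, 'old', '2012-01-01'),
        (1, 'new', '2012-06-01'),
        (2, 'only', '2012-03-01');
""")

rows = con.execute("""
    select t1.asset_no, t1.sub, t1.add_dtm
    from table1 t1
    inner join (
        select max(add_dtm) mxdate, asset_no
        from table1
        group by asset_no
    ) t2 on t1.add_dtm = t2.mxdate and t1.asset_no = t2.asset_no
    order by t1.asset_no
""").fetchall()

print(rows)  # one row per asset_no, carrying the latest add_dtm
```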
    
    qid & accept id: (12712480, 12712774) query: SQL query to test if string value contains carriage return soup:

    soup wrap:

    To find a value that contains non-printable characters such as a carriage return, a vertical tab, or an end-of-line character, you can use the regexp_like function. In your case, to display rows where the string value of a particular column ends with a carriage return, a query like the following can be used.

    select *
      from your_table_name
     where regexp_like(trim(string_column), '[[:space:]]$')
    

    Demo


    Answer to the comments

    The TRIM function, by default, deletes only leading and trailing spaces; it will not delete carriage return or end-of-line characters. Let's carry out a simple test:

    SQL> create table Test_Table(
      2    id number,
      3    col1 varchar2(101)
      4  );
    
    Table created
    
    SQL> insert into Test_Table (id, col1)
      2    values(1, 'Simple string');
    
    1 row inserted
    
    SQL> commit;
    
    Commit complete
    
    SQL> insert into Test_Table (id, col1)
      2    values(1, 'Simple string with carriage return at the end' || chr(13));
    
    1 row inserted
    
    SQL> commit;
    
    Commit complete
    
    SQL> insert into Test_Table (id, col1)
      2    values(1, '   Simple string with carriage return at the end leading and trailing spaces' || chr(13)||'   ');
    
    1 row inserted
    
    SQL> commit;
    
    Commit complete
    
    SQL> insert into Test_Table (id, col1)
      2    values(1, '  Simple string leading and trailing spaces  ');
    
    1 row inserted
    
    SQL> commit;
    
    Commit complete
    
    SQL> select *
      2    from test_table;
    
            ID COL1
    --------------------------------------------------------------------------------
             1 Simple string
             1 Simple string with carriage return at the end
             1    Simple string with carriage return at the end leading and trailing spaces
             1   Simple string leading and trailing spaces
    
    SQL> 
    SQL> select *
      2    from test_table
      3   where regexp_like(trim(col1), '[[:space:]]$')
      4  ;
    
            ID COL1
    ----------------------------------------------------------------------------------
             1 Simple string with carriage return at the end
             1    Simple string with carriage return at the end leading and trailing spaces
    
    SQL> 
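regexp_like is Oracle-specific. As a rough SQLite analogue (shown here through Python's sqlite3), a trailing carriage return can be spotted with LIKE and char(13); the table and rows are invented.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table test_table (id int, col1 varchar(101));
    insert into test_table values (1, 'clean string');
""")
# parameter binding keeps the carriage return literal
con.execute("insert into test_table values (2, ?)", ("ends with CR\r",))

# SQLite has no regexp_like by default; LIKE '%' || char(13)
# matches any value whose last character is a carriage return
rows = con.execute(
    "select id from test_table where col1 like '%' || char(13)"
).fetchall()
print(rows)  # only the row that ends with CR
```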
    
    qid & accept id: (12713468, 12713578) query: Can SQL determine which values from a set of possible column values do not exist? soup:

    soup wrap:

    Stick the allowed values in a temporary table allowed, then use a subquery using NOT IN:

    SELECT *
    FROM allowed
    WHERE allowed.val NOT IN (
        SELECT val FROM maintable
    )
    

    Some DBs will allow you to build up a table "in-place", instead of having to create a separate table. E.g. in PostgreSQL (any version):

    SELECT *
    FROM (
        SELECT 'foo' AS val
        UNION ALL SELECT 'bar'
        UNION ALL SELECT 'baz'    -- etc.
    ) inplace_allowed
    WHERE inplace_allowed.val NOT IN (
        SELECT val FROM maintable
    )
    

    More modern versions of PostgreSQL (and perhaps other DBs) will let you use the slightly nicer VALUES syntax to do the same thing.
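A self-contained SQLite version of the in-place variant looks like this; one caveat worth knowing is that NOT IN returns no rows at all if the subquery produces any NULLs. Data is invented.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table maintable (val varchar(10));
    insert into maintable values ('foo'), ('baz');  -- 'bar' is missing
""")

# build the allowed set "in place" and keep only values
# absent from maintable
rows = con.execute("""
    select val from (
        select 'foo' as val
        union all select 'bar'
        union all select 'baz'
    ) inplace_allowed
    where val not in (select val from maintable)
""").fetchall()

print(rows)  # only the value not present in maintable
```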

    qid & accept id: (12730070, 12730361) query: I need a way to use column values as column names in MySQL soup:

    soup wrap:

    You are trying to PIVOT the data, but MySQL does not have a PIVOT function. To make this easier, you will want to partition the data based on the degerAdi value and apply a row number. If you have a known number of columns, then you can use:

    select rn,
      max(case when DEGERADI = 'asd' then DEGER end) asd,
      max(case when DEGERADI = 'rty' then DEGER end) rty,
      max(case when DEGERADI = 'hhh' then DEGER end) hhh,
      max(case when DEGERADI = 'hjh' then DEGER end) hjh,
      max(case when DEGERADI = 'ffgu' then DEGER end) ffgu,
      max(case when DEGERADI = 'qwe' then DEGER end) qwe
    from
    (
      select id, degerAdi, deger,
       @num := if(@degerAdi = `degerAdi`, @num + 1, 1) as rn,
       @degerAdi := `degerAdi` as dummy
      from table1
    ) x
    group by rn;
    

    See SQL Fiddle With Demo

    If you have an unknown number of columns then you will want to use prepared statements:

    SET @sql = NULL;
    SELECT
      GROUP_CONCAT(DISTINCT
        CONCAT(
          'max(case when degerAdi = ''',
          degerAdi,
          ''' then deger end) AS ',
          degerAdi
        )
      ) INTO @sql
    FROM Table1;
    
    SET @sql 
      = CONCAT('SELECT rn, ', @sql, ' 
               from
               (
                 select id, degerAdi, deger,
                  @num := if(@degerAdi = `degerAdi`, @num + 1, 1) as rn,
                  @degerAdi := `degerAdi` as dummy
                 from table1
               ) x
               group by rn');
    
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
    

    See SQL Fiddle with demo
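The conditional-aggregation core of the pivot is portable. The sketch below drops the MySQL session-variable row numbering and assumes a hypothetical id column to group on, just to show the max(case …) mechanics; rows are invented.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table table1 (id int, degerAdi varchar(10), deger varchar(10));
    insert into table1 values
        (1, 'asd', 'x'), (1, 'rty', 'y'),
        (2, 'asd', 'p'), (2, 'rty', 'q');
""")

# each max(case ...) picks out one degerAdi's value per group,
# turning rows into columns
rows = con.execute("""
    select id,
        max(case when degerAdi = 'asd' then deger end) asd,
        max(case when degerAdi = 'rty' then deger end) rty
    from table1
    group by id
    order by id
""").fetchall()

print(rows)  # one row per id, with degerAdi values spread across columns
```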

    qid & accept id: (12773500, 12783577) query: SQLite count ocurrences in row soup:

    soup wrap:

    As mentioned, the SQL appears fine. I ran a quick test with the following:

    create table #temp
    (num int)
    
    insert #temp
    select 1 union all
    select 1 union all
    select 1 union all
    select 2 union all
    select 3 
    
    select Num, COUNT(num) as Occurances from #temp group by num
    
    drop table #temp
    

    This gives the below result set:

    Num Occurances
    1       3
    2       1
    3       1
    

    Compare the above to your whole code, including the table creation etc.
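The same test is easy to rerun anywhere; here it is against an in-memory SQLite database (a plain table stands in for #temp, which is T-SQL syntax).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table temp_nums (num int)")
con.executemany("insert into temp_nums values (?)",
                [(1,), (1,), (1,), (2,), (3,)])

rows = con.execute(
    "select num, count(num) as occurrences "
    "from temp_nums group by num order by num"
).fetchall()
print(rows)  # [(1, 3), (2, 1), (3, 1)] -- same result set as above
```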

    qid & accept id: (12783579, 12785937) query: Read file with multiple empty lines from ORACLE DB with BASH soup:

    soup wrap:

    Assuming the data is loaded into the CLOB with the line breaks as CHR(13)||CHR(10), and you can see it in the expected format if you just select directly from the table, then the problem is with how SQL*Plus is interacting with DBMS_OUTPUT.

    By default, SET SERVEROUTPUT ON sets the FORMAT to WORD_WRAPPED. The documentation says 'SQL*Plus left justifies each line, skipping all leading whitespace', but doesn't note that this also skips all blank lines.

    If you set SERVEROUTPUT ON FORMAT WRAPPED or ... TRUNCATED then your blank lines will reappear. But you need to make sure your linesize is wide enough for the longest possible line you want to print, particularly if you go with TRUNCATED.

    (Also, your code is not declaring l_pos NUMBER := 1, and is missing a final DBMS_OUTPUT.NEW_LINE so you'll lose the final line from the CLOB).


    To demonstrate, if I create a dummy table with just a CLOB column, and populate it with a value that has the carriage return/linefeed you're looking for:

    create table t42(text clob);
    
    insert into t42 values ('Hello Mr. X' || CHR(13) || CHR(10)
        || CHR(13) || CHR(10)
        || 'Text from Mailboddy' || CHR(13) || CHR(10)
        || CHR(13) || CHR(10)
        || 'Greetins' || CHR(13) || CHR(10)
        || 'Mr. Y');
    
    select * from t42;
    

    I get:

    TEXT
    --------------------------------------------------------------------------------
    Hello Mr. X
    
    Text from Mailboddy
    
    Greetins
    Mr. Y
    

    Using your procedure (very slightly modified so it will run):

    sqlplus -s $DBLOGIN < file
    SET FEEDBACK OFF;
    SET SERVEROUTPUT ON FORMAT WORD_WRAPPED; -- setting this explicitly for effect
    DECLARE
      l_text CLOB;
      l_pos number := 1; -- added this
    BEGIN
      SELECT text
        INTO l_text
        FROM t42;
      while dbms_lob.substr(l_text, 1, l_pos) is not null LOOP
        if dbms_lob.substr(l_text, 2, l_pos) = CHR(13) || CHR(10) then
          DBMS_OUTPUT.NEW_LINE;
          l_pos:=l_pos + 1;
        else
          DBMS_OUTPUT.put(dbms_lob.substr(l_text, 1, l_pos));
        end if;
        l_pos:=l_pos + 1;
      END LOOP;
      dbms_output.new_line; -- added this
    END;
    /
    
    ENDE_SQL
    

    file contains:

    Hello Mr. X
    Text from Mailboddy
    Greetins
    Mr. Y
    

    If I only change one line in your code, to:

    SET SERVEROUTPUT ON FORMAT WRAPPED;
    

    then file now contains:

    Hello Mr. X
    
    Text from Mailboddy
    
    Greetins
    Mr. Y
    

    You might want to consider UTL_FILE for this, rather than DBMS_OUTPUT, depending on your configuration. Something like this might give you some pointers.
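As a side illustration, the character-by-character PL/SQL loop can be mirrored in a few lines of Python, which makes the CR/LF handling (and why l_pos advances an extra step on a match) easier to see; this is only a model of the loop, not a substitute for the SERVEROUTPUT fix.

```python
def clob_to_lines(text):
    """Mirror the PL/SQL loop: emit a newline for each CR/LF pair,
    otherwise echo the character unchanged."""
    out, i = [], 0
    while i < len(text):
        if text[i:i + 2] == '\r\n':
            out.append('\n')
            i += 2  # matches the two l_pos increments in the PL/SQL
        else:
            out.append(text[i])
            i += 1
    return ''.join(out)

# blank lines (back-to-back CR/LF pairs) are preserved
print(clob_to_lines('Hello Mr. X\r\n\r\nText from Mailboddy'))
```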

    qid & accept id: (12815194, 12815234) query: Selecting an additional empty row that does not exist soup:

    soup wrap:

    Try using UNION ALL:

    SELECT null as PROFILETITLE, null as DOCID 
    UNION ALL
    SELECT PROFILETITLE, DOCID 
    FROM PROFILES
    WHERE COMPANYCODE=? 
    ORDER BY PROFILETITLE
    

    but if you want to add a header row, and DOCID is of type int, you have to use UNION ALL and CAST as below

    SELECT 'PROFILETITLE' as PROFILETITLE, 'DOCID' as DOCID 
    UNION ALL
    SELECT PROFILETITLE, CAST ( DOCID AS varchar(30) )
    FROM PROFILES
    WHERE COMPANYCODE=? 
    ORDER BY PROFILETITLE
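A quick SQLite check of the first variant (table and rows invented) confirms the extra empty row appears first, since NULLs sort ahead of real titles by default in SQLite.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table profiles (profiletitle varchar(30), docid int,
                           companycode varchar(5));
    insert into profiles values ('Alpha', 7, 'AA'), ('Beta', 9, 'AA');
""")

# the NULL row is glued on with UNION ALL; docid is cast to text
# so both branches have compatible column types
rows = con.execute("""
    select null as profiletitle, null as docid
    union all
    select profiletitle, cast(docid as varchar(30))
    from profiles
    where companycode = 'AA'
    order by profiletitle
""").fetchall()

print(rows)  # the all-NULL row sorts first
```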
    
    qid & accept id: (12818621, 12822738) query: Postgresql. Create array inside select query soup:

    soup wrap:

    Assuming your starting table is named plop

    SELECT
      plop.id,
      CASE
        WHEN plop.type = 1 THEN (SELECT array_agg(plop.entry * plop.size * val.x) FROM (VALUES (0.5), (0.3), (0.2)) val (x))::int4[]
        WHEN plop.type = 2 THEN (SELECT array_agg(3 * plop.entry * x/x ) FROM generate_series(1, plop.size / 3) x)::int4[]
        ELSE ARRAY[plop.entry * plop.size]::int4[]
      END AS prize_pool
    FROM plop
    ;
    

    That returns:

    ┌────┬──────────────────┐                                                                                                                                                                                       
    │ id │    prize_pool    │                                                                                                                                                                                       
    ├────┼──────────────────┤                                                                                                                                                                                       
    │  1 │ {100}            │                                                                                                                                                                                       
    │  2 │ {200}            │                                                                                                                                                                                       
    │  3 │ {150,90,60}      │                                                                                                                                                                                       
    │  4 │ {90,90,90,90,90} │                                                                                                                                                                                       
    └────┴──────────────────┘
    

    Because entry × size / (size / 3) = 3 × entry

    Note that x/x is always equal to 1; it is needed to tell Postgres over which set it must aggregate the results into an array.

    Hope it helps.
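array_agg and generate_series are Postgres-specific, but the arithmetic the CASE expression encodes restates easily in plain Python; the 50/30/20 weights and the size/3 rule are read off the query above, and the function below is only an illustrative model.

```python
def prize_pool(entry, size, type_):
    total = entry * size
    if type_ == 1:
        # split the pool 50/30/20 across three places
        return [round(total * w) for w in (0.5, 0.3, 0.2)]
    if type_ == 2:
        # size/3 equal prizes of 3*entry each,
        # since entry*size / (size/3) = 3*entry
        return [3 * entry] * (size // 3)
    # default: a single prize worth the whole pool
    return [total]

print(prize_pool(30, 10, 1))  # [150, 90, 60], as in row 3 of the output
```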

    qid & accept id: (12823575, 12824065) query: How do I find pairs that share the one property (column) through multiple tuples (rows)? soup:

    soup wrap:

    If you can accept CSV instead of tabulated results, you could simply group the table twice:

    SELECT GROUP_CONCAT(User) FROM (
      SELECT   User, GROUP_CONCAT(DISTINCT `Show` ORDER BY `Show` SEPARATOR 0x1e) AS s
      FROM     Shows
      GROUP BY User
    ) t GROUP BY s
    

    Otherwise, you can join the above subquery to itself:

    SELECT DISTINCT LEAST(t.User, u.User) AS User1,
                 GREATEST(t.User, u.User) AS User2
    FROM (
      SELECT   User, GROUP_CONCAT(DISTINCT `Show` ORDER BY `Show` SEPARATOR 0x1e) AS s
      FROM     Shows
      GROUP BY User
    ) t JOIN (
      SELECT   User, GROUP_CONCAT(DISTINCT `Show` ORDER BY `Show` SEPARATOR 0x1e) AS s
      FROM     Shows
      GROUP BY User
    ) u USING (s)
    WHERE t.User <> u.User
    

    See them on sqlfiddle.

    Of course, if duplicate (User, Show) pairs are guaranteed not to exist in the Shows table, you could improve performance by removing the DISTINCT keyword from the GROUP_CONCAT() aggregations.
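The idea — group users by their exact set of shows, then pair users within each group — can be prototyped outside SQL as well; the 0x1e separator in the MySQL version exists only to make the concatenated show list a collision-safe grouping key. Sample data invented.

```python
from collections import defaultdict
from itertools import combinations

# (user, show) rows, mimicking the Shows table
shows = [('ann', 'Lost'), ('ann', 'Dexter'),
         ('bob', 'Dexter'), ('bob', 'Lost'),
         ('cid', 'Lost')]

# collect each user's set of shows (the DISTINCT step)
by_user = defaultdict(set)
for user, show in shows:
    by_user[user].add(show)

# users are paired when their show sets are identical,
# mirroring the GROUP BY on the concatenated show list
by_set = defaultdict(list)
for user, s in by_user.items():
    by_set[frozenset(s)].append(user)

pairs = [tuple(sorted(p)) for users in by_set.values()
         for p in combinations(users, 2)]
print(pairs)  # only ann and bob watch exactly the same shows
```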

    qid & accept id: (12839031, 12840959) query: Sybase convert float to string soup:

    soup wrap:

    I'm not sure if there is an easier way to do that on Sybase.

    This example works for me

    declare @val float
    declare @val2 float
    select @val = 17.666655942234 
    select @val2 = 17.66
    select substring(convert(varchar(30),@val), 1, patindex('%.%',convert(varchar(30),@val)))+reverse(convert(varchar(30),convert(int,reverse(substring(convert(varchar(30),@val), patindex('%.%',convert(varchar(30),@val))+1,6))))) as Val,
           substring(convert(varchar(30),@val2), 1, patindex('%.%',convert(varchar(30),@val2)))+reverse(convert(varchar(30),convert(int,reverse(substring(convert(varchar(30),@val2), patindex('%.%',convert(varchar(30),@val2))+1,6))))) as Val2
    

    solution with varchar(15)

    declare @val numeric(10,5)
    declare @val2 numeric(10,5)
    select @val = convert(numeric(10,5),17.666655942234)
    select @val2 = convert(numeric(10,5),17.66)
    select convert(varchar(15),substring(convert(varchar(15),@val), 1, patindex('%.%',convert(varchar(15),@val)))+reverse(convert(varchar(15),convert(int,reverse(substring(convert(varchar(15),@val), patindex('%.%',convert(varchar(15),@val))+1,6)))))) as Val,
           convert(varchar(15),substring(convert(varchar(15),@val2), 1, patindex('%.%',convert(varchar(15),@val2)))+reverse(convert(varchar(15),convert(int,reverse(substring(convert(varchar(15),@val2), patindex('%.%',convert(varchar(15),@val2))+1,6)))))) as Val2
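For what it's worth, what the reverse/convert(int)/reverse gymnastics accomplish is rendering the number at fixed precision and then stripping the trailing zeros; a Python model of that intent (not Sybase-exact) is a couple of lines.

```python
def float_to_string(value, places=6):
    # render with fixed precision, then drop trailing zeros
    # (and the dot itself, if nothing is left after it)
    s = f"{value:.{places}f}"
    return s.rstrip('0').rstrip('.')

print(float_to_string(17.666655942234))  # 17.666656
print(float_to_string(17.66))            # 17.66
```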
    
    qid & accept id: (12849213, 12849254) query: MySQL query to return total Profit/Loss for a list of dates soup:

    soup wrap:

    Assuming that Date is stored as you show in the expected result, this should work:

    SELECT
       SUM(Amount) AS "Profit/Loss",
       Date
    FROM your_table
    GROUP BY(Date)
    

    Otherwise, if Date is of type DATE, DATETIME or TIMESTAMP, you could do something like this:

    SELECT
       SUM(Amount) AS "Profit/Loss",
       DATE_FORMAT(Date, '%d-%m-%y') AS Date
    FROM your_table
    GROUP BY(DATE_FORMAT(Date, '%d-%m-%y'))
    

    references:

    EDIT (after OP's comment)

    To achieve the cumulative SUM, here is a good hint:

    SET @csum := 0;
    SELECT
       (@csum := @csum + x.ProfitLoss) as ProfitLoss,
       x.Date
    FROM
    (
       SELECT
          SUM(Amount) AS ProfitLoss,
          DATE_FORMAT(Date, '%d-%m-%y') AS Date
       FROM your_table
       GROUP BY(DATE_FORMAT(Date, '%d-%m-%y'))
    ) x
    order by x.Date;
    

    Essentially, you store the running sum in a variable (@csum) and, for each row of the grouped transactions, increase it by that day's balance.
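The @csum bookkeeping can be mirrored with itertools.accumulate over the grouped daily balances; the trades below are invented.

```python
import sqlite3
from itertools import accumulate

con = sqlite3.connect(":memory:")
con.executescript("""
    create table trades (d date, amount int);
    insert into trades values
        ('2012-01-01', 100), ('2012-01-01', -30),
        ('2012-01-02', 50),  ('2012-01-03', -200);
""")

# the inner query: one profit/loss figure per day
daily = con.execute(
    "select d, sum(amount) from trades group by d order by d"
).fetchall()

# the outer @csum loop: a running total over the daily balances
running = list(accumulate(bal for _, bal in daily))
print(list(zip([d for d, _ in daily], running)))
```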

    qid & accept id: (12870094, 12870123) query: How can I group by on a field which has NULL values? soup:

    soup wrap:

    From Aggregate Functions in SQLite

    The count(X) function returns a count of the number of times that X is not NULL in a group. The count(*) function (with no arguments) returns the total number of rows in the group.

    So the COUNT function does not count NULLs; use COUNT(*) instead of COUNT(y).

    SELECT y, COUNT(*) AS COUNT
    FROM mytable
    GROUP BY y
    

    Or you can use COUNT(x), like this:

    SELECT y, COUNT(x) AS COUNT
    FROM mytable
    GROUP BY y
    

    See this SQLFiddle
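A small SQLite session makes the COUNT(*) versus COUNT(column) distinction concrete; the rows are invented, and the NULL group reports 0 for count(y) but not for count(*) or count(x).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table mytable (x varchar(10), y varchar(10));
    insert into mytable values ('a', 'g1'), ('b', 'g1'), ('c', null);
""")

rows = con.execute(
    "select y, count(*), count(y), count(x) "
    "from mytable group by y order by y"
).fetchall()
print(rows)  # count(y) is 0 for the NULL group; the others count the row
```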

    qid & accept id: (12875040, 12877084) query: Find similar objects that share the most tags soup:

    soup wrap:

    Given one object, you can find its tags like this:

     SELECT t1.id
     FROM tags t1
     where t1.parent_id = ?
    

    Building on that, you want to take that list of tags and find other parent_ids that share them.

     SELECT parent_id, count(*)
     FROM tags t2
     WHERE EXISTS (
         SELECT t1.id
         FROM tags t1
         WHERE t1.parent_id = ?
         AND t1.id = t2.id
     )
     GROUP BY parent_id
    

    That will give you a count of how many tags those other parent_ids share.

    You can ORDER BY count(*) desc if you'd like to find the "most similar" rows first.
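
    For example, the grouped query above with that ordering applied (a sketch; add LIMIT/TOP as your RDBMS allows to keep only the closest matches):

     SELECT parent_id, count(*) AS shared_tags
     FROM tags t2
     WHERE EXISTS (
         SELECT t1.id
         FROM tags t1
         WHERE t1.parent_id = ?
         AND t1.id = t2.id
     )
     GROUP BY parent_id
     ORDER BY count(*) DESC
    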

    Hope that helps.

    qid & accept id: (12879550, 12879631) query: How to select row with max value when duplicate rows exist in SQL Server soup:

    soup wrap:

    You're basically just missing a status comparison, since you want one row per status:

    SELECT *
    FROM WF_Approval sr1
    WHERE NOT EXISTS (
        SELECT *
        FROM  WF_Approval sr2 
        WHERE sr1.DocumentID = sr2.DocumentID AND 
              sr1.Status = sr2.Status AND                  -- <-- new line
              sr1.StepNumber < sr2.StepNumber
    ) AND MasterStepID = 'Approval1'
    

    Or rewritten as a JOIN:

    SELECT *
    FROM WF_Approval sr1
    LEFT JOIN WF_Approval sr2
      ON sr1.DocumentID = sr2.DocumentID 
     AND sr1.Status = sr2.Status
     AND sr1.StepNumber < sr2.StepNumber
    WHERE sr2.DocumentID IS NULL
      AND sr1.MasterStepID = 'Approval1';
    

    SQLfiddle with both versions of the query here.

    qid & accept id: (12899727, 12899749) query: SQL - Check if all the columns in one table also exist in another soup:
    soup wrap:
    select X
    from A
    LEFT OUTER JOIN B on A.x = B.X
    WHERE B.X IS NULL
    

    to get all records from table A that are not in table B. Or

    select X
    from B
    LEFT OUTER JOIN A on A.x = B.X
    WHERE A.X IS NULL
    

    to get all records from table B that are not in table A.
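
    Since the question asks about columns, the same anti-join pattern can be pointed at the catalog instead of the data; a sketch using the standard INFORMATION_SCHEMA views (the table names 'A' and 'B' are illustrative):

    select A.COLUMN_NAME
    from INFORMATION_SCHEMA.COLUMNS A
    LEFT OUTER JOIN INFORMATION_SCHEMA.COLUMNS B
      on B.TABLE_NAME = 'B' and B.COLUMN_NAME = A.COLUMN_NAME
    WHERE A.TABLE_NAME = 'A'
      AND B.COLUMN_NAME IS NULL
    

    to list every column of table A that has no same-named column in table B.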

    qid & accept id: (12951673, 12952233) query: Oracle Cast using %TYPE attribute soup:

    soup wrap:

    %TYPE is only available in PL/SQL, and can only be used in the declaration section of a block. So, you can't do what you're attempting.

    You might think you could declare your own PL/SQL (sub)type and use that in the statement:

    declare
        subtype my_type is t1.v%type;
    begin
        insert into t1 select cast(v as my_type) from t2;
    end;
    /
    

    ... but that also won't work, because cast() is an SQL function not a PL/SQL one, and only recognises built-in and schema-level collection types; and you can't create an SQL type using the %TYPE either.


    As a nasty hack, you could do something like:

    insert into t1 select substr(v, 1,
        select data_length
        from user_tab_columns
        where table_name = 'T1'
        and column_name = 'V') from t2;
    

    Which would be slightly more palatable if you could have that length stored in a variable - a substitution or bind variable in SQL*Plus, or a local variable in PL/SQL. For example, if it's a straight SQL update through SQL*Plus you could use a bind variable:

    var t1_v_len number;
    begin
        select data_length into :t1_v_len
        from user_tab_columns
        where table_name = 'T1' and column_name = 'V';
    end;
    /
    insert into t1 select substr(v, 1, :t1_v_len) from t2;
    

    Something similar could be done in other set-ups, it depends where the insert is being performed.
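
    In PL/SQL, for example, the same hack with a local variable could be sketched like this (same T1/T2 tables as above):

    declare
        t1_v_len number;
    begin
        select data_length into t1_v_len
        from user_tab_columns
        where table_name = 'T1' and column_name = 'V';

        insert into t1 select substr(v, 1, t1_v_len) from t2;
    end;
    /
    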

    qid & accept id: (12989520, 12989554) query: Update text of column soup:

    soup wrap:

    Try this one,

    update tab 
    set mytext = concat('text none, ', Replace(mytext, 'text none',''));
    

    SQLFiddle Demo

    Or simply use REPLACE, if you don't have any special reason to use CONCAT:

    update tab 
    set mytext = Replace(mytext, 'text none','text none, ');
    
    qid & accept id: (13003656, 13003667) query: SQL GROUP BY and a condition on COUNT soup:

    soup wrap:

    Use a HAVING clause to filter an aggregated column.

    SELECT   id, count(oID) 
    FROM     MyTable 
    GROUP BY id
    HAVING   count(oID) = 1
    

    UPDATE 1

    Wrap the results in a subquery:

    SELECT a.*
    FROM tableName a INNER JOIN
        (
            SELECT   id 
            FROM     MyTable 
            GROUP BY id  
            HAVING   count(oID) = 1
        ) b ON a.ID = b.ID
    
    qid & accept id: (13024512, 13024731) query: How to return requested results? soup:

    soup wrap:

    If you want to show all regions, and within each region to count the number with populations greater than 10 million, then probably this is easiest:

    SELECT region, SUM(CASE WHEN population > 10000000 THEN 1 ELSE 0 END) as BigCountries
    FROM bbc
    GROUP BY region
    

    So if you have a region where no countries have a population greater than 10000000, you'll still have a row with that region name and a 0.


    From your comments to @Yograj Gupta's question - if you want regions where all countries have populations > 10000000, then you can either modify the above:

    SELECT region, COUNT(*) as Cnt,SUM(CASE WHEN population > 10000000 THEN 1 ELSE 0 END) as BigCountries
    FROM bbc
    GROUP BY region
    HAVING COUNT(*) = SUM(CASE WHEN population > 10000000 THEN 1 ELSE 0 END)
    

    Or just exploit a simpler property:

    SELECT region, COUNT(*) as Cnt,MIN(population) as LowestPop
    FROM bbc
    GROUP BY region
    HAVING MIN(population) > 10000000
    

    Where the minimum population across a region's countries is > 10000000, all of its countries must have a population > 10000000.

    qid & accept id: (13054785, 13054905) query: How to update selective rows in a table in sql server? soup:

    soup wrap:

    Okay, the query should look like this to update items 1, 2, 3 and 4:

     UPDATE Items
     SET bitIsTab = 1
     WHERE ReqID IN (1,2,3,4);
    

    It can however be done using Linq:

    var selectedIds = new List<int> { 1, 2, 3, 4 };
    var itemsToBeUpdated = (from i in yourContext.Items
                            where selectedIds.Contains(i.ReqID)
                            select i).ToList();
    itemsToBeUpdated.ForEach(i => i.bitIsTab = true);
    yourContext.SubmitChanges();
    

    Or you could use a VARCHAR in your stored procedure:

    CREATE PROCEDURE sp_setTabItems
        @ids varchar(500) AS
     UPDATE Items
     SET bitIsTab = 1
     WHERE charindex(',' + CONVERT(varchar(12), ReqID) + ',', ',' + @ids + ',') > 0;
    

    And then use "1,2,3,4" as your stored procedure parameter.

    To execute the stored procedure:

     EXEC sp_setTabItems '1,2,3,4'
    

    Could also be done in a more reusable way, with the bitIsTab as a parameter:

    CREATE PROCEDURE sp_setTabItems
        @isTab bit,
        @ids varchar(500) AS
     UPDATE Items
     SET bitIsTab = @isTab 
     WHERE charindex(',' + CONVERT(varchar(12), ReqID) + ',', ',' + @ids + ',') > 0;
    

    And executed this way:

    EXEC sp_setTabItems '1,2,3,4',1
    

    I updated the stored procedure solution, since comparing an INT with a VARCHAR won't work with the EXEC.

    qid & accept id: (13055295, 13055403) query: Increment value in SQL SELECT statement soup:

    soup wrap:

    You can try this

    select
        td.DocID, td.FullName, td.DocContRole,
        row_number() over (partition by td.DocID, td.DocContRole order by td.FullName) as NumRole
    from dbo.#TempDoc_DocContRoles as td
    

    So the dynamic SQL will be something like this:

    SQL FIDDLE EXAMPLE

    create table #t2
    (
        DocID int, FullName nvarchar(max), 
        NumRole nvarchar(max)
    )
    
    declare @pivot_columns nvarchar(max), @stmt nvarchar(max)
    
    insert into #t2
    select
        td.DocID, td.FullName,
        td.DocContRole + 
        cast(
            row_number() over 
            (partition by td.DocID, td.DocContRole order by td.FullName)
        as nvarchar(max)) as NumRole
    from t as td
    
    select
        @pivot_columns = 
        isnull(@pivot_columns + ', ', '') + 
        '[' +  NumRole + ']'
    from (select distinct NumRole from #t2) as T
    
    select @stmt = '
    select *
    from #t2 as t
    pivot
    (
    min(FullName)
    for NumRole in (' + @pivot_columns + ')
    ) as PT'
    
    exec sp_executesql
        @stmt = @stmt
    
    qid & accept id: (13068001, 13068383) query: update each row with different values in temp table soup:

    soup wrap:

    SQL Server Solution

    This query will sequentially take the values from the temp table and update the code in the example table in round robin fashion, repeating the values from temp when required.

    update e
    set code = t.code
    from example e
    join temp t on t.id = (e.id -1) % (select count(*) from temp) + 1
    

    If the ids are not sequential in either table, then you can row_number() them first, e.g.

    update e
    set code = t.code
    from (select *,rn=row_number() over (order by id) from example) e
    join (select *,rn=row_number() over (order by id) from temp) t
      on t.rn = (e.rn -1) % (select count(*) from temp) + 1
    

    The same technique (mod, row-number) can be used in other RDBMS, but the syntax will differ a little.
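
    For example, in PostgreSQL the first statement could be sketched as follows (note that UPDATE ... FROM there does not repeat the target table in the FROM clause):

    update example e
    set code = t.code
    from temp t
    where t.id = (e.id - 1) % (select count(*) from temp) + 1;
    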

    qid & accept id: (13069202, 13069340) query: Regexp_like with placeholders perl soup:

    soup wrap:

    I think you should be able to do:

    select id_name from name_table where regexp_like(name, ?);
    

    If only part of the regexp comes from the placeholder, use string concatenation:

    select id_name from name_table where regexp_like(name, ? || '[a-z]$');
    
    qid & accept id: (13080106, 13080121) query: How to combine these queries that group by the same field? soup:

    soup wrap:

    If you have only three possible values of cached, you can use this:

    SELECT DATE(datetime) as datetime,
            SUM(CASE WHEN cached = 'a' THEN 1 ELSE 0 END) cached_a,
            SUM(CASE WHEN cached = 'b' THEN 1 ELSE 0 END) cached_b,
            SUM(CASE WHEN cached = 'c' THEN 1 ELSE 0 END) cached_c
    FROM requests
    GROUP BY DAY(datetime)
    

    Otherwise, if you have an arbitrary number of cached values, you can use a Prepared Statement:

    SET @sql = NULL;
    SELECT
      GROUP_CONCAT(DISTINCT
        CONCAT(
          'SUM(CASE WHEN cached =  ''',
          cached,
          ''' then 1 ELSE 0 end) AS ',
          CONCAT('cached_',cached)
        )
      ) INTO @sql
    FROM requests;
    
    SET @sql = CONCAT('SELECT DATE(datetime) as datetime, ', @sql, ' 
                       FROM requests 
                       GROUP BY DAY(datetime)');
    
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
    
    qid & accept id: (13096793, 13096833) query: SQL query to get total amount from 2 table and sort by date soup:
    soup wrap:
        SELECT COALESCE(o.date, p.date) date, Sales, Purchases
          FROM (SELECT date, SUM(amount) Sales FROM CustomerOrder GROUP BY date) o
     FULL JOIN (SELECT date, SUM(amount) Purchases FROM PurchaseOrder GROUP BY date) p
            ON o.date = p.date
      ORDER BY date
    

    MySQL doesn't support FULL JOIN, so specifically for MySQL, you can use

        SELECT o.date, Sales, Purchases
          FROM (SELECT date, SUM(amount) Sales FROM CustomerOrder GROUP BY date) o
     LEFT JOIN (SELECT date, SUM(amount) Purchases FROM PurchaseOrder GROUP BY date) p
            ON o.date = p.date
     UNION ALL
        SELECT date, NULL, SUM(amount) Purchases
          FROM PurchaseOrder p2
         WHERE NOT EXISTS (SELECT *
                           FROM CustomerOrder o2
                           WHERE o2.date = p2.date)
      GROUP BY date
      ORDER BY date
    
    qid & accept id: (13103114, 13103164) query: T:SQL: select values from rows as columns soup:

    soup wrap:

    It's easy to do this without the PIVOT keyword, just by grouping:

    select
        P.ProfileID,
        min(case when PD.PropertyName = 'FirstName' then P.PropertyValue else null end) as FirstName,
        min(case when PD.PropertyName = 'LastName' then P.PropertyValue else null end) as LastName,
        min(case when PD.PropertyName = 'Salary' then P.PropertyValue else null end) as Salary
    from Profiles as P
        left outer join PropertyDefinitions as PD on PD.PropertyDefinitionID = P.PropertyDefinitionID
    group by P.ProfileID
    

    You can also do this with the PIVOT keyword:

    select
        *
    from
    (
        select P.ProfileID, P.PropertyValue, PD.PropertyName
        from Profiles as P
            left outer join PropertyDefinitions as PD on PD.PropertyDefinitionID = P.PropertyDefinitionID
    ) as P
        pivot
        (
            min(P.PropertyValue)
            for P.PropertyName in ([FirstName], [LastName], [Salary])
        ) as PIV
    

    UPDATE: For dynamic number of properties - take a look at Increment value in SQL SELECT statement

    qid & accept id: (13110356, 13120794) query: Best way to store huge log data soup:

    soup wrap:

    Partitioning in PostgreSQL works great for big logs. First create the parent table:

    create table  game_history_log (
        gameid integer,
        views integer,
        plays integer,
        likes integer,
        log_date date
    );
    

    Now create the partitions. In this case one per month, about 900k rows each, would be good:

    create table game_history_log_201210 (
        check (log_date between '2012-10-01' and '2012-10-31')
    ) inherits (game_history_log);
    
    create table game_history_log_201211 (
        check (log_date between '2012-11-01' and '2012-11-30')
    ) inherits (game_history_log);
    

    Notice the check constraints in each partition. If you try to insert into the wrong partition:

    insert into game_history_log_201210 (
        gameid, views, plays, likes, log_date
    ) values (1, 2, 3, 4, '2012-09-30');
    ERROR:  new row for relation "game_history_log_201210" violates check constraint "game_history_log_201210_log_date_check"
    DETAIL:  Failing row contains (1, 2, 3, 4, 2012-09-30).
    

    One of the advantages of partitioning is that queries only search the relevant partition, which drastically and consistently reduces the search size no matter how many years of data there are. Here is the EXPLAIN output for a search on a certain date:

    explain
    select *
    from game_history_log
    where log_date = date '2012-10-02';
                                                  QUERY PLAN                                              
    ------------------------------------------------------------------------------------------------------
     Result  (cost=0.00..30.38 rows=9 width=20)
       ->  Append  (cost=0.00..30.38 rows=9 width=20)
             ->  Seq Scan on game_history_log  (cost=0.00..0.00 rows=1 width=20)
                   Filter: (log_date = '2012-10-02'::date)
             ->  Seq Scan on game_history_log_201210 game_history_log  (cost=0.00..30.38 rows=8 width=20)
                   Filter: (log_date = '2012-10-02'::date)
    

    Notice that apart from the parent table it only scanned the correct partition. Obviously you can have indexes on the partitions to avoid a sequential scan.
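
    For example, since each partition is an ordinary table, it gets its own index (the index name is illustrative):

    create index game_history_log_201210_log_date_idx
        on game_history_log_201210 (log_date);
    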

    Inheritance Partitioning

    qid & accept id: (13128635, 13128831) query: Using a left join and checking if the row existed along with another check in where clause soup:

    soup wrap:

    According to this answer, in SQL Server using NOT EXISTS is more efficient than LEFT JOIN/IS NULL:

    SELECT  *
    FROM    Users u
    WHERE   u.IsActive = 1
    AND     u.Status <> 'disabled'
    AND     NOT EXISTS (SELECT 1 FROM Banned b WHERE b.UserID = u.UserID)
    

    EDIT

    For the sake of completeness this is how I would do it with a LEFT JOIN:

    SELECT  *
    FROM    Users u
            LEFT JOIN Banned b
                ON b.UserID = u.UserID
    WHERE   u.IsActive = 1
    AND     u.Status <> 'disabled'
    AND     b.UserID IS NULL        -- EXCLUDE ROWS WITH A MATCH IN `BANNED`
    
    qid & accept id: (13144230, 13144419) query: Divisioning of results of two select SQL-statements soup:

    soup wrap:

    You can create Views for things like this.

    create view vResult1 as
    select your(
             complicated(
               query(
                 here()
               )
             )
           );
    
    create view vResult2 as
    select another(
             complicated(
               query(
                 here()
               )
             )
           );
    

    Then you can divide one result by the other. Assuming each view returns a single value in a column called val (adapt to your actual column name):

    select (select val from vResult1) / (select val from vResult2);
    

    If you need parameters for your complicated queries, you can use stored procedures.
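
    For example, a parameterized version could be sketched as a MySQL stored procedure (the val column and id parameter are illustrative; adapt them to what your views actually return):

    delimiter //
    create procedure divide_results(in p_id int)
    begin
        select (select val from vResult1 where id = p_id)
             / (select val from vResult2 where id = p_id);
    end //
    delimiter ;
    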

    qid & accept id: (13159227, 13162198) query: SQL Dynamic Columns soup:

    soup wrap:

    The basic syntax will be:

    select user,
        sum(case when wrapupcode = 'Service' then 1 else 0 end) Service,
        sum(case when wrapupcode = 'Sales' then 1 else 0 end) Sales,
        sum(case when wrapupcode = 'Meeting' then 1 else 0 end) Meeting,
        sum(case when wrapupcode = 'Other' then 1 else 0 end) Other,
        count(timediff) timediff
    from
    (       
        
    ) src
    group by user
    

    A hard-coded static version will be something similar to this:

    select user,
        sum(case when wrapupcode = 'Service' then 1 else 0 end) Service,
        sum(case when wrapupcode = 'Sales' then 1 else 0 end) Sales,
        sum(case when wrapupcode = 'Meeting' then 1 else 0 end) Meeting,
        sum(case when wrapupcode = 'Other' then 1 else 0 end) Other,
        count(timediff) timediff
    from
    (       
        select u.loginid as user,
            b.name wrapupcode,
            time(age.`instime`) as initialtime,
            age.`ENDOFWRAPUPTIME` AS endofwrapup,
            count(timediff(age.`ENDOFWRAPUPTIME`,   time(age.`instime`))) as timediff
        from agentcallinformation age
        left join `axpuser` u
            on age.userid = u.pkey
        left join `breakcode` b
            on age.wrapupcode = b.pkey
            and age.wrapupcode <> ''
        WHERE age.endofwrapuptime IS NOT null 
    ) src
    group by user
    

    I changed the query to use JOIN syntax instead of the correlated subqueries.

    If you need a dynamic version, then you can use prepared statements:

    SET @sql = NULL;
    SELECT
      GROUP_CONCAT(DISTINCT
        CONCAT(
          'sum(case when wrapupcode = ''',
          name,
          ''' then 1 else 0 end) AS ',
          name
        )
      ) INTO @sql
    FROM breakcode;
    
    SET @sql = CONCAT('SELECT user, ', @sql, ' 
                        , count(timediff) timediff
                      from
                      (     
                        select u.loginid as user,
                            b.name wrapupcode,
                            time(age.`instime`) as initialtime,
                            age.`ENDOFWRAPUPTIME` AS endofwrapup,
                            count(timediff(age.`ENDOFWRAPUPTIME`,   time(age.`instime`))) as timediff
                        from agentcallinformation age
                        left join `axpuser` u
                            on age.userid = u.pkey
                        left join `breakcode` b
                            on age.wrapupcode = b.pkey
                            and age.wrapupcode <> ''
                        WHERE age.endofwrapuptime IS NOT null 
                    ) src
                    GROUP BY user');
    
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
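    The GROUP_CONCAT-then-PREPARE pattern above can be sketched outside MySQL too. Here is a minimal Python/sqlite3 version (table and data are made up for illustration) that derives the pivot columns from the distinct codes before running the query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE calls (user TEXT, wrapupcode TEXT);
INSERT INTO calls VALUES
  ('alice','Service'),('alice','Sales'),('bob','Service'),('bob','Service');
""")

# Build one SUM(CASE ...) column per distinct code, like the GROUP_CONCAT step.
codes = [r[0] for r in conn.execute(
    "SELECT DISTINCT wrapupcode FROM calls ORDER BY wrapupcode")]
cols = ", ".join(
    "SUM(CASE WHEN wrapupcode = '{0}' THEN 1 ELSE 0 END) AS {0}".format(c)
    for c in codes)
sql = "SELECT user, {} FROM calls GROUP BY user ORDER BY user".format(cols)

rows = conn.execute(sql).fetchall()
# rows -> [('alice', 1, 1), ('bob', 0, 2)] with columns (user, Sales, Sеrvice order)
```

    Like the prepared-statement version, the column list is built from the data itself, so new codes appear as new columns without editing the query.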
    
    qid & accept id: (13173833, 13173914) query: How to have many column from just one column? soup:


    This should do it:

    SELECT  ID,
            StudentID,
            Mon,
            MAX(CASE WHEN Type LIKE 'Obtained%' THEN Value END) AS Obtained,
            MAX(CASE WHEN Type LIKE 'Benefit%' THEN Value END) AS Benefit,
            MAX(CASE WHEN Type LIKE 'Max%' THEN Value END) AS `Max`,
            CASE WHEN RIGHT(Type, 2) = 'II' THEN 'II' ELSE 'I' END AS Type
    FROM    T
    GROUP BY ID, StudentID, Mon, CASE WHEN RIGHT(Type, 2) = 'II' THEN 'II' ELSE 'I' END
    ORDER BY ID, StudentID, Mon, Type
    

    EXAMPLE ON SQL FIDDLE

    Although it would make more sense to store the type separately, i.e. have one column for "obtained", "max", etc. and another column for "I", "II".

    EDIT

    With your revised data structure this should work:

    SELECT  ID,
            StudentID,
            Mon,
            COALESCE(MAX(CASE WHEN Type IN (1, 7) THEN Value END), 0) AS Obtained,
            COALESCE(MAX(CASE WHEN Type IN (2, 8) THEN Value END), 0) AS Benefit,
            COALESCE(MAX(CASE WHEN Type IN (4, 10) THEN Value END), 0) AS `Max`,
            CASE WHEN Type IN (7, 8, 10) THEN 'II' WHEN Type IN (1, 2, 4) THEN 'I' END AS Type
    FROM    T
    WHERE   Type IN (1, 2, 4, 7, 8, 10)
    GROUP BY ID, StudentID, Mon, CASE WHEN Type IN (7, 8, 10) THEN 'II' WHEN Type IN (1, 2, 4) THEN 'I' END
    ORDER BY ID, StudentID, Mon, Type
    

    EXAMPLE ON SQL FIDDLE
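    The MAX(CASE ...) pivot above can be exercised with any SQL engine; here is a minimal Python/sqlite3 sketch with made-up data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE T (StudentID INTEGER, Type TEXT, Value INTEGER);
INSERT INTO T VALUES (1,'ObtainedI',55),(1,'BenefitI',5),(1,'MaxI',100);
""")

# Each CASE picks out one Type; MAX collapses the group to a single row.
row = conn.execute("""
    SELECT StudentID,
           MAX(CASE WHEN Type LIKE 'Obtained%' THEN Value END) AS Obtained,
           MAX(CASE WHEN Type LIKE 'Benefit%'  THEN Value END) AS Benefit,
           MAX(CASE WHEN Type LIKE 'Max%'      THEN Value END) AS MaxVal
    FROM T GROUP BY StudentID
""").fetchone()
# row -> (1, 55, 5, 100)
```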

    qid & accept id: (13183568, 13184134) query: Database schema design for financial forecasting soup:


    I'd think it would be better to store each month's forecast in its own row in a table that looks like this

    month   forecast
    -----   --------
        1      30000
        2      31000
        3      28000
       ...       ...
        60     52000
    

    Then you can use the aggregate functions to calculate forecast reports, discounted cash flow, etc. For example, if you want the un-discounted total for just 4 years: SELECT SUM(forecast) FROM FORECASTS WHERE month >= 1 AND month <= 48
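    As a quick sanity check of the month-range SUM, here is a minimal Python/sqlite3 sketch (the forecast figures are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE FORECASTS (month INTEGER PRIMARY KEY, forecast INTEGER)")
conn.executemany("INSERT INTO FORECASTS VALUES (?, ?)",
                 [(m, 1000 * m) for m in range(1, 61)])

# Un-discounted total for the first 4 years (months 1..48).
total = conn.execute(
    "SELECT SUM(forecast) FROM FORECASTS WHERE month >= 1 AND month <= 48"
).fetchone()[0]
# total = 1000 * (1 + 2 + ... + 48) = 1176000
```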

    For salary expenses, I would think that a view that does the calculations on the fly (or a "materialized view", if your DB engine supports them) should have sufficient performance unless we're talking about some giant number of employees or a really slow DB.

    Maybe have a salary history table that a trigger populates when employee data changes / payroll runs:

    employeeId    month   Salary
    ----------    -----   ------
             1        1     4000
             2        1     3000
             3        1     5000
             1        2     4100
             2        2     3100
             3        2     4800
           ...      ...      ...
    

    Then, again, you can use SUM or another aggregate function to get to the reported data.

    qid & accept id: (13230133, 13230189) query: Selecting all uppercased-value rows of a table in SQL Navigator soup:


    I believe Oracle is case-sensitive by default? If so, then this should work:

    SELECT *
    FROM table_name
    WHERE LOWER(email) <> email
    

    If this works then you can simply update them with

    UPDATE table_name
    SET email = LOWER(email)
    WHERE LOWER(email) <> email
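    A minimal Python/sqlite3 sketch of the same select-then-update approach (table and addresses are made up; sqlite's default comparison is case-sensitive, like Oracle's):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table_name (email TEXT);
INSERT INTO table_name VALUES ('A@X.COM'), ('b@x.com');
""")

# Rows where the stored value differs from its lowercased form.
bad = conn.execute(
    "SELECT email FROM table_name WHERE LOWER(email) <> email").fetchall()

# Normalize just those rows.
conn.execute("UPDATE table_name SET email = LOWER(email) "
             "WHERE LOWER(email) <> email")
fixed = conn.execute("SELECT email FROM table_name ORDER BY email").fetchall()
# fixed -> [('a@x.com',), ('b@x.com',)]
```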
    
    qid & accept id: (13234818, 13261639) query: Formatting External tables in Greenplum (PostgreSQL) soup:


    It appears that you can:

    SET DATESTYLE = 'YMD';
    

    before SELECTing from the table. This will affect the interpretation of all dates, though, not just those from the file. If you consistently use unambiguous ISO dates elsewhere that will be fine, but it may be a problem if (for example) you need to also accept 'D/M/Y' date literals in the same query.

    This is specific to GreenPlum's CREATE EXTERNAL TABLE and does not apply to SQL-standard SQL/MED foreign data wrappers, as shown below.


    What surprises me is that PostgreSQL proper (which does not have this CREATE EXTERNAL TABLE feature) always accepts ISO-style YYYY-MM-DD and YYYYMMDD dates, irrespective of DATESTYLE. Observe:

    regress=> SELECT '20121229'::date, '2012-12-29'::date, current_setting('DateStyle');
        date    |    date    | current_setting 
    ------------+------------+-----------------
     2012-12-29 | 2012-12-29 | ISO, MDY
    (1 row)
    
    regress=> SET DateStyle = 'DMY';
    SET
    regress=> SELECT '20121229'::date, '2012-12-29'::date, current_setting('DateStyle');
        date    |    date    | current_setting 
    ------------+------------+-----------------
     2012-12-29 | 2012-12-29 | ISO, DMY
    (1 row)
    

    ... so if GreenPlum behaved the same way, you should not need to do anything to get these YYYYMMDD dates to be read correctly from the input file.

    Here's how it works with a PostgreSQL file_fdw SQL/MED foreign data wrapper:

    CREATE EXTENSION file_fdw;
    
    COPY (SELECT '20121229', '2012-12-29') TO '/tmp/dates.csv' CSV;
    
    SET DateStyle = 'DMY';
    
    CREATE SERVER csvtest FOREIGN DATA WRAPPER file_fdw;
    
    CREATE FOREIGN TABLE csvtest (
        date1 date,
        date2 date
    ) SERVER csvtest OPTIONS ( filename '/tmp/dates.csv', format 'csv' );
    
    SELECT * FROM csvtest ;
       date1    |   date2    
    ------------+------------
     2012-12-29 | 2012-12-29
    (1 row)
    

    The CSV file contents are:

    20121229,2012-12-29
    

    so you can see that Pg will always accept ISO dates for CSV, irrespective of datestyle.

    If GreenPlum doesn't, please file a bug. The idea of DateStyle changing the way a foreign table is read after creation is crazy.

    qid & accept id: (13237623, 13237661) query: Copy data into another table soup:


    If both tables are truly the same schema:

    INSERT INTO newTable
    SELECT * FROM oldTable
    

    Otherwise, you'll have to specify the column names (the column list for newTable is optional if you are specifying a value for all columns and selecting columns in the same order as newTable's schema):

    INSERT INTO newTable (col1, col2, col3)
    SELECT column1, column2, column3
    FROM oldTable
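    A minimal Python/sqlite3 sketch of the INSERT ... SELECT copy (table names as in the answer, data made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE oldTable (col1 INTEGER, col2 TEXT);
CREATE TABLE newTable (col1 INTEGER, col2 TEXT);
INSERT INTO oldTable VALUES (1, 'a'), (2, 'b');
""")

# Same schema on both sides, so SELECT * is safe here.
conn.execute("INSERT INTO newTable SELECT * FROM oldTable")
copied = conn.execute("SELECT * FROM newTable ORDER BY col1").fetchall()
# copied -> [(1, 'a'), (2, 'b')]
```

    With differing schemas, the explicit column lists from the second form of the answer are the safer choice.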
    
    qid & accept id: (13241518, 13242284) query: sql query including month columns? soup:


    Try this:

    --setup
    create table #fa00100 (assetId int, assetindex int, acquisitionCost int, dateAcquired date)
    create table #fa00200 (assetIndex int, moDepreciateRate int, fullyDeprFlag nchar(1), fullyDeprFlagBit bit)
    
    insert #fa00100 
          select 1, 1, 100, '2012-01-09'
    union select 2, 2, 500, '2012-05-09'
    insert #fa00200
          select 1, 10, 'N', 0
    union select 2, 15, 'Y', 1
    

    .

    --solution
    create table #dates (d date not null primary key clustered)
    declare @sql nvarchar(max)
    , @pivotCols nvarchar(max)
    , @thisMonth date
    , @noMonths int = 4
    
    set @thisMonth = cast(1 + GETUTCDATE() - DAY(getutcdate()) as date)
    select @thisMonth
    while @noMonths > 0
    begin
        insert #dates select DATEADD(month,@noMonths,@thisMonth) 
        set @noMonths = @noMonths - 1
    end
    
    select @sql = ISNULL(@sql + NCHAR(10) + ',', '') 
    --+ ' A.acquisitionCost - (B.moDepreciateRate * DATEDIFF(month,dateAcquired,''' + convert(nvarchar(8), d, 112) + ''')) ' --Original Line
        + ' case when A.acquisitionCost - (B.moDepreciateRate * DATEDIFF(month,dateAcquired,''' + convert(nvarchar(8), d, 112) + ''')) <= 0 then 0 else A.acquisitionCost - (B.moDepreciateRate * DATEDIFF(month,dateAcquired,''' + convert(nvarchar(8), d, 112) + ''')) end ' --new version
    
    + quotename(DATENAME(month, d) + '_' + right(cast(10000 + YEAR(d) as nvarchar(5)),4))
    from #dates
    
    set @sql = 'select A.assetid
    , A.acquisitionCost
    , B.moDepreciateRate 
    ,' + @sql + '
    from #fa00100 A
    inner join #fa00200 B 
        on A.assetindex = B.assetindex
    where B.fullyDeprFlag = ''N''
    and B.fullyDeprFlagBit = 0
    '
    --nb: B.fullyDeprFlag = ''N'' uses doubled single quotes to keep them from terminating the string
    --I've also included fullyDeprFlagBit to show how the SQL would look if you had a bit column - that will perform much better and will save space over using a character column
    
    print @sql
    exec(@sql)
    
    drop table #dates 
    

    .

        --remove temp tables from setup
    drop table #fa00100
    drop table #fa00200
    
    qid & accept id: (13249903, 13250303) query: MySQL select multiple rows by referencing to one data field soup:


    Sounds like you want this:

    select model_id
    from yourtable
    where property in (1, 3)
    group by model_id
    having count(*) > 1;
    

    See SQL Fiddle with Demo

    Or you can use the following:

    select model_id
    from yourtable t1
    where property = 1
      and exists (select model_id
                  from yourtable t2
                  where t1.model_id = t2.model_id
                    and property = 3)
    

    See SQL Fiddle with Demo
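    The HAVING-count approach can be exercised in Python/sqlite3 (data made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE yourtable (model_id INTEGER, property INTEGER);
INSERT INTO yourtable VALUES (1,1),(1,3),(2,1),(3,3);
""")

# Only models that have BOTH property 1 and property 3 survive the HAVING.
models = [r[0] for r in conn.execute("""
    SELECT model_id FROM yourtable
    WHERE property IN (1, 3)
    GROUP BY model_id
    HAVING COUNT(*) > 1
    ORDER BY model_id
""")]
# models -> [1]
```

    Note that if the same (model_id, property) pair can appear twice, HAVING COUNT(DISTINCT property) = 2 is the safer condition.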

    qid & accept id: (13277973, 13278064) query: SQL average number of requests per user over time period soup:


    Something like the SUM function will work. It might be a little slow.

    SELECT SUM(requestType) FROM Requests WHERE userEmail = 'some-user-email' AND `date` BETWEEN 'first-date YYYY-MM-DD' AND 'second-date YYYY-MM-DD';
    

    SQL SUM

    I would also recommend, if you have a lot of requests, having one row per user per day and just updating the request total for that user.

    Edit: If you want the last 30 days, something like this query should work. It worked on my test table.

     SELECT SUM(requestType) FROM Requests WHERE userEmail = 'some-user-email' AND `date` BETWEEN curdate() - INTERVAL 30 DAY AND curdate();
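    A minimal Python/sqlite3 sketch of the 30-day window, with sqlite's date('now', '-30 day') standing in for MySQL's curdate() - INTERVAL 30 DAY (data made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Requests (userEmail TEXT, requestType INTEGER, date TEXT);
INSERT INTO Requests VALUES
  ('u@x.com', 1, date('now', '-5 day')),
  ('u@x.com', 1, date('now', '-40 day'));
""")

# Only the request inside the 30-day window is counted.
total = conn.execute("""
    SELECT SUM(requestType) FROM Requests
    WHERE userEmail = 'u@x.com'
      AND date BETWEEN date('now', '-30 day') AND date('now')
""").fetchone()[0]
# total -> 1
```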
    
    qid & accept id: (13281693, 13281782) query: Comparing number in formatted string in MySQL? soup:


    Because your numbers are zero-padded, as long as the four-letter prefix is the same and always the same length, this should work, since MySQL will do a lexicographical comparison.

    Note that one less 0 in the padding will cause this to fail:

    SET @policy1 = 'XXXX-00099';
    SET @policy2 = 'XXXX-000598';
    SELECT @policy1, @policy2, @policy1 > @policy2 AS comparison;
    =========================================
    > 'XXXX-00099', 'XXXX-000598', 1
    

    If you need to truly compare the numbers at the end, you will need to parse them out and cast them:

    SET @policy1 = 'XXXX-00099';
    SET @policy2 = 'XXXX-000598';
    SELECT @policy1, @policy2, 
       CONVERT(SUBSTRING(@policy1, INSTR(@policy1, '-')+1), UNSIGNED) >
       CONVERT(SUBSTRING(@policy2, INSTR(@policy2, '-')+1), UNSIGNED) AS comparison;
    =========================================
    > 'XXXX-00099', 'XXXX-000598', 0
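    The same two comparisons can be checked in a few lines of Python (policy values taken from the answer):

```python
# Lexicographic comparison fails when the zero-padding differs; parsing out
# the numeric suffix compares the actual numbers.
def policy_number(policy: str) -> int:
    # take everything after the '-' and read it as an integer
    return int(policy.split('-', 1)[1])

p1, p2 = 'XXXX-00099', 'XXXX-000598'
lex = p1 > p2                                  # True: '9' > '5' at the first difference
num = policy_number(p1) > policy_number(p2)    # False: 99 < 598
```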
    
    qid & accept id: (13308281, 13308310) query: MySQL GROUP BY "and filter" soup:
    SELECT name, GROUP_CONCAT(number)
    FROM objects
    WHERE number IN (2,3)
    GROUP BY name
    HAVING COUNT(*) = 2
    

    or if you want to retain all value on which the name has,

    SELECT  a.name, GROUP_CONCAT(A.number)
    FROM    objects a
            INNER JOIN
            (
              SELECT name
              FROM objects
              WHERE number IN (2,3)
              GROUP BY name
              HAVING COUNT(*) = 2
            ) b ON a.Name = b.Name
    GROUP BY a.name
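    The first query can be exercised in Python/sqlite3, which also has GROUP_CONCAT (data made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE objects (name TEXT, number INTEGER);
INSERT INTO objects VALUES ('a',2),('a',3),('b',2),('c',2),('c',3),('c',5);
""")

# Only names having both 2 and 3 survive; only those two values are concatenated.
both = conn.execute("""
    SELECT name, GROUP_CONCAT(number) FROM objects
    WHERE number IN (2, 3)
    GROUP BY name
    HAVING COUNT(*) = 2
    ORDER BY name
""").fetchall()
# both -> names 'a' and 'c', each with '2' and '3' concatenated
```

    As with the original, HAVING COUNT(DISTINCT number) = 2 is safer if duplicate rows are possible.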
    
    qid & accept id: (13345583, 13345825) query: oracle - how to list out the products that are going to expire in 2months time? soup:


    Your data says that the others, apart from pear + orange, expire today, so assuming you want to exclude those expiring today and include those expiring WITHIN 2 months' time:

    SQL> select food, manufacturedate, add_months(manufacturedate,12) expiry_date from product where add_months(manufacturedate, 12) <= add_months(trunc(sysdate), 2) and add_months(manufacturedate, 12) > trunc(sysdate);
    
    FOOD        MANUFACTU EXPIRY_DA
    --------------- --------- ---------
    orange      12-JAN-12 12-JAN-13
    pear        12-JAN-12 12-JAN-13
    

    or a more index friendly way of putting it (removing the functions on the column side):

    SQL> select food, manufacturedate, add_months(manufacturedate,12) expiry_date from product where manufacturedate <= add_months(trunc(sysdate), -10) and manufacturedate > add_months(trunc(sysdate), -12);
    
    FOOD        MANUFACTU EXPIRY_DA
    --------------- --------- ---------
    orange      12-JAN-12 12-JAN-13
    pear        12-JAN-12 12-JAN-13
    
    qid & accept id: (13377997, 13382350) query: Join and Union with Entity Framework soup:


    If I understand correctly:

    A customer may or may not have an additional email in the emails table, and a customer may have more than one additional email entry there, like below:

    List<Customer> customers = new List<Customer>
    { 
        new Customer { ClientId = 1, Email = "client1@domain.com", Credits = 2 },
        new Customer { ClientId = 2, Email = "client2@domain.com", Credits = 1 },
        new Customer { ClientId = 3, Email = "client3@domain.com", Credits = 1 },
    };
    
    List<Emails> emails = new List<Emails>
    { 
        new Emails { ClientId = 1, Email = "client1-2@domain.com" },
        new Emails { ClientId = 1, Email = "client1-3@domain.com" },
        new Emails { ClientId = 2, Email = "client2-1@domain.com" },
    };
    

    In that case, use the query below to get it done:

    var result = from c in customers
                 let _emails = emails.Where(e => c.ClientId == e.ClientId).Select(t => t.Email)
                 where c.Email == "client3@domain.com" || _emails.Contains("client3@domain.com")
                 select new
                 {
                     Allowed = c.Credits > 0,
                     MainEmail = c.Email
                 };
    

    I hope it helps you.

    qid & accept id: (13406949, 13408736) query: Formatting a number as a monetary value including separators soup:


    Do it on the client side. Having said that, this example should show you the way.

    with p(price1, multiplier) as (select 1234.5, 10)
    select '$' + replace(cast((CAST(p.Price1 AS decimal(10,2)) * cast(isnull(p.Multiplier,1) as decimal(10,2))) as varchar), '.0000', ''),
           '$' + parsename(convert(varchar,cast(p.price1*isnull(p.Multiplier,1) as money),1),2)
    from p
    

    The key is in the last expression

    '$' + parsename(convert(varchar,cast(p.price1*isnull(p.Multiplier,1) as money),1),2)
    

    Note: if p.price1 is of a higher precision than decimal(10,2), then you may have to cast it in the expression as well to produce a faithful translation, since the original CAST(p.Price1 AS decimal(10,2)) will be performing rounding.
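    For comparison, when the formatting does happen on the client side, a thousands separator is a one-line format spec, e.g. in Python (values taken from the example; cents kept rather than stripped):

```python
# Multiply price by multiplier, then format with a thousands separator.
price, multiplier = 1234.5, 10

formatted = '${:,.2f}'.format(price * multiplier)
# formatted -> '$12,345.00'
```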

    qid & accept id: (13410246, 13413778) query: syntax to query another table using relationship in ORM? soup:

    There are various ways to achieve that:

    \n

    1. use join(...) - I would opt for this one in your case

    \n
    qry = session.query(Sample).join(Cell).filter(Cell.name == "a_string")\n\n>> SELECT sample.id AS sample_id, sample.factor_id AS sample_factor_id\n>> FROM sample JOIN cell ON cell.id = sample.factor_id\n>> WHERE cell.name = :name_1\n
    \n

    2. use any/has(...) - this will use a sub-query

    \n
    qry = session.query(Sample).filter(Sample.cell.has(Cell.name == "a_string"))\n\n>> SELECT sample.id AS sample_id, sample.factor_id AS sample_factor_id\n>> FROM sample\n>> WHERE EXISTS (SELECT 1\n>> FROM cell\n>> WHERE cell.id = sample.factor_id AND cell.name = :name_1)\n
    \n soup wrap:

    There are various ways to achieve that:

    1. use join(...) - I would opt for this one in your case

    qry = session.query(Sample).join(Cell).filter(Cell.name == "a_string")
    
    >> SELECT sample.id AS sample_id, sample.factor_id AS sample_factor_id
    >> FROM sample JOIN cell ON cell.id = sample.factor_id
    >> WHERE cell.name = :name_1
    

    2. use any/has(...) - this will use a sub-query

    qry = session.query(Sample).filter(Sample.cell.has(Cell.name == "a_string"))
    
    >> SELECT sample.id AS sample_id, sample.factor_id AS sample_factor_id
    >> FROM sample
    >> WHERE EXISTS (SELECT 1
    >> FROM cell
    >> WHERE cell.id = sample.factor_id AND cell.name = :name_1)
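For comparison, the two generated SQL shapes return the same rows. A minimal sketch against an in-memory SQLite database (table and column names copied from the generated SQL above; this illustrates the SQL, not the SQLAlchemy API):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cell (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE sample (id INTEGER PRIMARY KEY,
                         factor_id INTEGER REFERENCES cell(id));
    INSERT INTO cell VALUES (1, 'a_string'), (2, 'other');
    INSERT INTO sample VALUES (10, 1), (11, 2), (12, 1);
""")

# join(...) shape
join_rows = conn.execute("""
    SELECT sample.id FROM sample
    JOIN cell ON cell.id = sample.factor_id
    WHERE cell.name = ?""", ("a_string",)).fetchall()

# has(...) shape: correlated EXISTS sub-query
exists_rows = conn.execute("""
    SELECT sample.id FROM sample
    WHERE EXISTS (SELECT 1 FROM cell
                  WHERE cell.id = sample.factor_id
                    AND cell.name = ?)""", ("a_string",)).fetchall()
```

Both queries return samples 10 and 12, so the choice between them is about style and plan, not results.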
    
    qid & accept id: (13419701, 13420383) query: Compare two sets of an SQL "GROUP BY" result soup:

    I assumed a TrainRoutes table with one row for each of R1, R2 etc. You could replace this with select distinct RouteID from Stops if required.

    \n
    Select\n    r1.RouteID Route1,\n    r2.RouteID Route2\nFrom\n    -- cross to compare each route with each route\n    dbo.TrainRoutes r1\n        Cross Join\n    dbo.TrainRoutes r2\n        Inner Join\n    dbo.Stops s1\n        On r1.RouteID = s1.RouteID\n        Inner Join\n    dbo.Stops s2\n        On r2.RouteID = s2.RouteID\nWhere\n    r1.RouteID < r2.RouteID -- no point in comparing R1 with R2 and R2 with R1\nGroup By\n    r1.RouteID,\n    r2.RouteID\nHaving\n     -- check each route has the same number of stations\n    count(Distinct s1.stationID) = count(Distinct s2.stationID) And\n    -- check each route has the same stops\n    Sum(Case When s1.StationID = s2.StationID Then 1 Else 0 End) = count(Distinct s1.StationID) And\n    -- check each route has different halts\n    sum(Case When s1.StationID = s2.StationID And s1.Halts = s2.Halts Then 1 Else 0 End) != count(Distinct s1.StationID)\n
    \n

    You can also do this without the TrainRoute table like so, but you're now cross joining two larger tables:

    \n
    Select\n    s1.RouteID Route1,\n    s2.RouteID Route2\nFrom\n    dbo.Stops s1\n        Cross Join\n    dbo.Stops s2\nWhere\n    s1.RouteID < s2.RouteID\nGroup By\n    s1.RouteID,\n    s2.RouteID\nHaving\n    count(Distinct s1.stationID) = count(Distinct s2.stationID) And\n    Sum(Case When s1.StationID = s2.StationID Then 1 Else 0 End) = count(Distinct s1.StationID) And\n    sum(Case When s1.StationID = s2.StationID And s1.Halts = s2.Halts Then 1 Else 0 End) != count(Distinct s1.StationID)\n
    \n

    http://sqlfiddle.com/#!6/76978/8

    \n soup wrap:

    I assumed a TrainRoutes table with one row for each of R1, R2 etc. You could replace this with select distinct RouteID from Stops if required.

    Select
        r1.RouteID Route1,
        r2.RouteID Route2
    From
        -- cross to compare each route with each route
        dbo.TrainRoutes r1
            Cross Join
        dbo.TrainRoutes r2
            Inner Join
        dbo.Stops s1
            On r1.RouteID = s1.RouteID
            Inner Join
        dbo.Stops s2
            On r2.RouteID = s2.RouteID
    Where
        r1.RouteID < r2.RouteID -- no point in comparing R1 with R2 and R2 with R1
    Group By
        r1.RouteID,
        r2.RouteID
    Having
         -- check each route has the same number of stations
        count(Distinct s1.stationID) = count(Distinct s2.stationID) And
        -- check each route has the same stops
        Sum(Case When s1.StationID = s2.StationID Then 1 Else 0 End) = count(Distinct s1.StationID) And
        -- check each route has different halts
        sum(Case When s1.StationID = s2.StationID And s1.Halts = s2.Halts Then 1 Else 0 End) != count(Distinct s1.StationID)
    

    You can also do this without the TrainRoute table like so, but you're now cross joining two larger tables:

    Select
        s1.RouteID Route1,
        s2.RouteID Route2
    From
        dbo.Stops s1
            Cross Join
        dbo.Stops s2
    Where
        s1.RouteID < s2.RouteID
    Group By
        s1.RouteID,
        s2.RouteID
    Having
        count(Distinct s1.stationID) = count(Distinct s2.stationID) And
        Sum(Case When s1.StationID = s2.StationID Then 1 Else 0 End) = count(Distinct s1.StationID) And
        sum(Case When s1.StationID = s2.StationID And s1.Halts = s2.Halts Then 1 Else 0 End) != count(Distinct s1.StationID)
    

    http://sqlfiddle.com/#!6/76978/8
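The Stops-only variant uses nothing beyond standard aggregates, so it can be exercised outside SQL Server as well. A sketch with hypothetical data: R2 shares R1's stations but halts differently (should pair), while R3 is identical to R1 (should not):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Stops (RouteID TEXT, StationID TEXT, Halts INTEGER);
    INSERT INTO Stops VALUES
        ('R1', 'A', 1), ('R1', 'B', 2),   -- baseline route
        ('R2', 'A', 1), ('R2', 'B', 3),   -- same stations, different halts at B
        ('R3', 'A', 1), ('R3', 'B', 2);   -- identical to R1
""")

pairs = conn.execute("""
    SELECT s1.RouteID, s2.RouteID
    FROM Stops s1 CROSS JOIN Stops s2
    WHERE s1.RouteID < s2.RouteID
    GROUP BY s1.RouteID, s2.RouteID
    HAVING COUNT(DISTINCT s1.StationID) = COUNT(DISTINCT s2.StationID)
       AND SUM(CASE WHEN s1.StationID = s2.StationID THEN 1 ELSE 0 END)
           = COUNT(DISTINCT s1.StationID)
       AND SUM(CASE WHEN s1.StationID = s2.StationID
                     AND s1.Halts = s2.Halts THEN 1 ELSE 0 END)
           != COUNT(DISTINCT s1.StationID)
""").fetchall()
```

The identical pair (R1, R3) is filtered out by the third HAVING condition, exactly as intended.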

    qid & accept id: (13427389, 13427423) query: Recipe Database, search by ingredient soup:

    Since a recipe can use multiple ingredients and you are looking for recipes that use one or more of the ingredients specified, you should use the DISTINCT keyword to prevent duplicate results where a recipe is using more than one ingredient from the list specified. Also, you can use IN clause to filter on multiple ingredient IDs.

    \n
    select DISTINCT r.name\nfrom \n    recipes r\n    inner join ingredient_index i\n    on i.recipe_id = r.recipe_id\nwhere i.ingredient_id IN (7, 5);\n
    \n

    Alternatively, if you are looking for recipes that use all of the ingredients specified in the list, then you can group the results by recipe name and check whether the count of records is the same as the number of ingredients in your list.

    \n
    select r.name\nfrom \n    recipes r\n    inner join ingredient_index i\n    on i.recipe_id = r.recipe_id\nwhere i.ingredient_id IN (7, 5)\nGROUP BY r.name\nHAVING COUNT(*) = 2\n
    \n

    This is assuming that there won't be duplicate records with the same (recipe_id, ingredient_id) tuple (better ensured with a UNIQUE constraint).

    \n soup wrap:

    Since a recipe can use multiple ingredients and you are looking for recipes that use one or more of the specified ingredients, you should use the DISTINCT keyword to prevent duplicate results where a recipe uses more than one ingredient from the specified list. Also, you can use the IN clause to filter on multiple ingredient IDs.

    select DISTINCT r.name
    from 
        recipes r
        inner join ingredient_index i
        on i.recipe_id = r.recipe_id
    where i.ingredient_id IN (7, 5);
    

    Alternatively, if you are looking for recipes that use all of the ingredients specified in the list, then you can group the results by recipe name and check whether the count of records is the same as the number of ingredients in your list.

    select r.name
    from 
        recipes r
        inner join ingredient_index i
        on i.recipe_id = r.recipe_id
    where i.ingredient_id IN (7, 5)
    GROUP BY r.name
    HAVING COUNT(*) = 2
    

    This is assuming that there won't be duplicate records with the same (recipe_id, ingredient_id) tuple (better ensured with a UNIQUE constraint).
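Both queries are portable SQL, so they can be tried against an in-memory SQLite database. A sketch with hypothetical recipe data (only the omelette uses both ingredients 5 and 7):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE recipes (recipe_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE ingredient_index (recipe_id INTEGER, ingredient_id INTEGER,
                                   UNIQUE (recipe_id, ingredient_id));
    INSERT INTO recipes VALUES (1, 'Omelette'), (2, 'Toast'), (3, 'Soup');
    INSERT INTO ingredient_index VALUES (1, 5), (1, 7), (2, 5), (3, 9);
""")

# Recipes using ANY of the listed ingredients (DISTINCT collapses duplicates)
any_of = conn.execute("""
    SELECT DISTINCT r.name
    FROM recipes r JOIN ingredient_index i ON i.recipe_id = r.recipe_id
    WHERE i.ingredient_id IN (7, 5)""").fetchall()

# Recipes using ALL of the listed ingredients (count must match list length)
all_of = conn.execute("""
    SELECT r.name
    FROM recipes r JOIN ingredient_index i ON i.recipe_id = r.recipe_id
    WHERE i.ingredient_id IN (7, 5)
    GROUP BY r.name
    HAVING COUNT(*) = 2""").fetchall()
```

The UNIQUE constraint is what makes `HAVING COUNT(*) = 2` safe; without it a recipe listing the same ingredient twice could be counted as matching both.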

    qid & accept id: (13452415, 13452448) query: SQL Server 2008 insert into table using loop soup:
    \n

    I want to use the IDs I get from this query to insert in another table\n Member which uses ContaId as a foreign key.

    \n
    \n

    You can use INSERT INTO .. SELECT instead of cursors and while loops like so:

    \n
    INSERT INTO Member(ContaId)\nSELECT TOP 1000 c.ContaId\nFROM FastGroupe fg\nINNER JOIN FastParticipant fp \n    ON fg.FastGroupeId = fp.FastGroupeId\nINNER JOIN Participant p\n    ON fp.ParticipantId = p.ParticipantId\nINNER JOIN Contact c\n    ON p.ContaId = c.ContaId\nWHERE FastGroupeName like '%Group%'\n
    \n

    Update: Try this:

    \n
    INSERT INTO Member(ContaId, PromoId)\nSELECT TOP 1000 c.ContaId, 91 AS PromoId\nFROM FastGroupe fg\n...\n
    \n

    This will insert the same value 91 for PromoId for all 1000 records. And since MemberId is set to be automatic, just omit it from the column list and it will get a value automatically.

    \n soup wrap:

    I want to use the IDs I get from this query to insert in another table Member, which uses ContaId as a foreign key.

    You can use INSERT INTO .. SELECT instead of cursors and while loops like so:

    INSERT INTO Member(ContaId)
    SELECT TOP 1000 c.ContaId
    FROM FastGroupe fg
    INNER JOIN FastParticipant fp 
        ON fg.FastGroupeId = fp.FastGroupeId
    INNER JOIN Participant p
        ON fp.ParticipantId = p.ParticipantId
    INNER JOIN Contact c
        ON p.ContaId = c.ContaId
    WHERE FastGroupeName like '%Group%'
    

    Update: Try this:

    INSERT INTO Member(ContaId, PromoId)
    SELECT TOP 1000 c.ContaId, 91 AS PromoId
    FROM FastGroupe fg
    ...
    

    This will insert the same value 91 for PromoId for all 1000 records. And since MemberId is set to be automatic, just omit it from the column list and it will get a value automatically.
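The set-based idea can be sketched in SQLite (which has no TOP, so LIMIT stands in for it; the join chain is trimmed to the one table that matters for the insert):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Contact (ContaId INTEGER PRIMARY KEY);
    CREATE TABLE Member (MemberId INTEGER PRIMARY KEY AUTOINCREMENT,
                         ContaId INTEGER, PromoId INTEGER);
    INSERT INTO Contact VALUES (1), (2), (3);
""")

# Set-based insert: the constant 91 is repeated for every selected row,
# and MemberId is omitted so it auto-populates.
conn.execute("""
    INSERT INTO Member (ContaId, PromoId)
    SELECT ContaId, 91 FROM Contact LIMIT 1000
""")

members = conn.execute(
    "SELECT MemberId, ContaId, PromoId FROM Member ORDER BY MemberId"
).fetchall()
```

One statement replaces the whole cursor/while loop, which is also what gives the engine room to optimize.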

    qid & accept id: (13471159, 13471249) query: Combine multiple rows of table in single row in SQL soup:

    You can use FOR XML PATH:

    \n
    SELECT Ticket, \n  STUFF((SELECT distinct ' - ' + cast(UpdatedBy as varchar(20)) + ' ' + comment\n              from yourtable t2\n              where t1.Ticket = t2.Ticket\n            FOR XML PATH(''), TYPE\n\n            ).value('.', 'NVARCHAR(MAX)') \n        ,1,2,'') comments\nfrom yourtable t1\ngroup by ticket\n
    \n

    See SQL Fiddle with Demo

    \n

    Result:

    \n
    | TICKET |                                       COMMENTS |\n-----------------------------------------------------------\n|    100 |  23 Text 1 - 24 Text 2 - 25 Text 3 - 26 Text 4 |\n
    \n soup wrap:

    You can use FOR XML PATH:

    SELECT Ticket, 
      STUFF((SELECT distinct ' - ' + cast(UpdatedBy as varchar(20)) + ' ' + comment
                  from yourtable t2
                  where t1.Ticket = t2.Ticket
                FOR XML PATH(''), TYPE
    
                ).value('.', 'NVARCHAR(MAX)') 
            ,1,2,'') comments
    from yourtable t1
    group by ticket
    

    See SQL Fiddle with Demo

    Result:

    | TICKET |                                       COMMENTS |
    -----------------------------------------------------------
    |    100 |  23 Text 1 - 24 Text 2 - 25 Text 3 - 26 Text 4 |
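FOR XML PATH is SQL Server-specific. As a point of comparison only, SQLite's group_concat collapses the same rows to one string (hypothetical data matching the result above; note that group_concat's element order is not guaranteed, much as the `distinct` in the original reorders elements):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE yourtable (Ticket INTEGER, UpdatedBy INTEGER, comment TEXT);
    INSERT INTO yourtable VALUES
        (100, 23, 'Text 1'), (100, 24, 'Text 2'),
        (100, 25, 'Text 3'), (100, 26, 'Text 4');
""")

# group_concat plays the role of FOR XML PATH + STUFF
rows = conn.execute("""
    SELECT Ticket, group_concat(UpdatedBy || ' ' || comment, ' - ')
    FROM yourtable
    GROUP BY Ticket
""").fetchall()
```

Each ticket comes back as a single row with a `' - '`-joined comment string, the same shape as the COMMENTS column above.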
    
    qid & accept id: (13474207, 13474490) query: sql query if parameter is null select all soup:

    You can also use functions IFNULL,COALESCE,NVL,ISNULL to check null value. It depends on your RDBMS.

    \n

    MySQL:

    \n
    SELECT NAME, SURNAME FROM MY_TABLE WHERE NAME = IFNULL(?,NAME);\n
    \n

    or

    \n
    SELECT NAME, SURNAME FROM MY_TABLE WHERE NAME = COALESCE(?,NAME);\n
    \n

    ORACLE:

    \n
    SELECT NAME, SURNAME FROM MY_TABLE WHERE NAME = NVL(?,NAME);\n
    \n

    SQL Server / SYBASE:

    \n
    SELECT NAME, SURNAME FROM MY_TABLE WHERE NAME = ISNULL(?,NAME);\n
    \n soup wrap:

    You can also use the functions IFNULL, COALESCE, NVL, or ISNULL to handle a null value; which one is available depends on your RDBMS.

    MySQL:

    SELECT NAME, SURNAME FROM MY_TABLE WHERE NAME = IFNULL(?,NAME);
    

    or

    SELECT NAME, SURNAME FROM MY_TABLE WHERE NAME = COALESCE(?,NAME);
    

    ORACLE:

    SELECT NAME, SURNAME FROM MY_TABLE WHERE NAME = NVL(?,NAME);
    

    SQL Server / SYBASE:

    SELECT NAME, SURNAME FROM MY_TABLE WHERE NAME = ISNULL(?,NAME);
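The trick is easy to verify with the portable COALESCE spelling (hypothetical data; sketch in SQLite). When the bound parameter is NULL, the predicate degrades to `NAME = NAME`, which matches every row with a non-NULL name:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE MY_TABLE (NAME TEXT, SURNAME TEXT);
    INSERT INTO MY_TABLE VALUES ('Ann', 'Lee'), ('Bob', 'Ray');
""")

def find(name):
    # COALESCE(?, NAME) returns the parameter when it is non-NULL,
    # otherwise the column itself -- i.e. "match everything".
    return conn.execute(
        "SELECT NAME, SURNAME FROM MY_TABLE WHERE NAME = COALESCE(?, NAME)",
        (name,)).fetchall()

one = find("Ann")       # filters to a single name
everyone = find(None)   # NULL parameter: selects all rows
```

One caveat worth knowing: rows whose NAME is itself NULL are still excluded, because `NULL = NULL` is not true.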
    
    qid & accept id: (13523272, 13524149) query: SQL Server Group Concat with Different characters soup:
    \n

    make a more human readable solution

    \n
    \n

    Sorry, this is the best I can do with your requirement.

    \n

    SQL Fiddle

    \n

    MS SQL Server 2008 Schema Setup:

    \n
    create table YourTable\n(\n  ParentID int,\n  ChildName varchar(10)\n);\n\ninsert into YourTable values\n(1, 'Max'),\n(1, 'Jessie'),\n(2, 'Steven'),\n(2, 'Lucy'),\n(2, 'Jake'),\n(3, 'Mark');\n
    \n

    Query 1:

    \n
    with T as \n(\n  select ParentID,\n         ChildName,\n         row_number() over(partition by ParentID order by ChildName) as rn,\n         count(*) over(partition by ParentID) as cc\n  from YourTable\n)\nselect T1.ParentID,\n       (\n         select case\n                  when T2.rn = 1 and T2.cc > 1 then ' and '\n                  else ', ' \n                end + T2.ChildName\n         from T as T2\n         where T1.ParentID = T2.ParentID\n         order by T2.rn desc\n         for xml path(''), type\n       ).value('substring(text()[1], 3)', 'varchar(max)') as ChildNames\nfrom T as T1\ngroup by T1.ParentID\n
    \n

    Results:

    \n
    | PARENTID |            CHILDNAMES |\n------------------------------------\n|        1 |        Max and Jessie |\n|        2 | Steven, Lucy and Jake |\n|        3 |                  Mark |\n
    \n soup wrap:

    make a more human readable solution

    Sorry, this is the best I can do with your requirement.

    SQL Fiddle

    MS SQL Server 2008 Schema Setup:

    create table YourTable
    (
      ParentID int,
      ChildName varchar(10)
    );
    
    insert into YourTable values
    (1, 'Max'),
    (1, 'Jessie'),
    (2, 'Steven'),
    (2, 'Lucy'),
    (2, 'Jake'),
    (3, 'Mark');
    

    Query 1:

    with T as 
    (
      select ParentID,
             ChildName,
             row_number() over(partition by ParentID order by ChildName) as rn,
             count(*) over(partition by ParentID) as cc
      from YourTable
    )
    select T1.ParentID,
           (
             select case
                      when T2.rn = 1 and T2.cc > 1 then ' and '
                      else ', ' 
                    end + T2.ChildName
             from T as T2
             where T1.ParentID = T2.ParentID
             order by T2.rn desc
             for xml path(''), type
           ).value('substring(text()[1], 3)', 'varchar(max)') as ChildNames
    from T as T1
    group by T1.ParentID
    

    Results:

    | PARENTID |            CHILDNAMES |
    ------------------------------------
    |        1 |        Max and Jessie |
    |        2 | Steven, Lucy and Jake |
    |        3 |                  Mark |
    
    qid & accept id: (13537347, 13537369) query: Get row where column2 is X and column1 is max of column1 soup:
    SELECT * FROM table WHERE col2='CDE' ORDER BY col1 DESC LIMIT 1\n
    \n

    In case col1 isn't an auto-incrementing column, it would go somewhat like:

    \n
    SELECT *,MAX(col1) AS max_col1 FROM table WHERE col2='CDE' GROUP BY col2 LIMIT 1\n
    \n soup wrap:
    SELECT * FROM table WHERE col2='CDE' ORDER BY col1 DESC LIMIT 1
    

    In case col1 isn't an auto-incrementing column, it would go somewhat like:

    SELECT *,MAX(col1) AS max_col1 FROM table WHERE col2='CDE' GROUP BY col2 LIMIT 1
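A sketch of the first query with hypothetical data. Note that the GROUP BY variant above relies on MySQL's permissive handling of non-aggregated columns and may pull the other columns from an arbitrary row on other engines, so the ORDER BY / LIMIT form is the safer default:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (col1 INTEGER, col2 TEXT);
    INSERT INTO t VALUES (1, 'CDE'), (5, 'CDE'), (3, 'CDE'), (9, 'XYZ');
""")

# Row with the maximum col1 among rows where col2 = 'CDE'
top = conn.execute(
    "SELECT col1, col2 FROM t WHERE col2 = 'CDE' ORDER BY col1 DESC LIMIT 1"
).fetchone()
```

With an index on `(col2, col1)` this is also cheap: the engine can read a single row off the index instead of sorting.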
    
    qid & accept id: (13545617, 13545670) query: Reference from one table to another entire table and specified row soup:

    I assume you are using a MySQL database.

    \n
    CREATE TABLE A\n(\n    id INT NOT NULL PRIMARY KEY,\n    b_id INT NOT NULL,\n    c_id INT NOT NULL,\n    FOREIGN KEY (b_id) REFERENCES B (id),\n    FOREIGN KEY (c_id) REFERENCES C (id)\n) TYPE = INNODB;\n
    \n

    Update for using postgresql:

    \n
    CREATE TABLE "A"\n(\n   id integer NOT NULL, \n   b_id integer NOT NULL, \n   c_id integer NOT NULL, \n   CONSTRAINT id PRIMARY KEY (id), \n   CONSTRAINT b_id FOREIGN KEY (b_id) REFERENCES "B" (id) \n      ON UPDATE NO ACTION ON DELETE NO ACTION, --with no action restriction\n   CONSTRAINT c_id FOREIGN KEY (c_id) REFERENCES "C" (id) \n      ON UPDATE CASCADE ON DELETE CASCADE  --with cascade restriction\n) \nWITH (\n  OIDS = FALSE\n)\n;\nALTER TABLE "C" OWNER TO postgres;\n
    \n
    \n soup wrap:

    I assume you are using a MySQL database.

    CREATE TABLE A
    (
        id INT NOT NULL PRIMARY KEY,
        b_id INT NOT NULL,
        c_id INT NOT NULL,
        FOREIGN KEY (b_id) REFERENCES B (id),
        FOREIGN KEY (c_id) REFERENCES C (id)
    ) ENGINE = InnoDB;
    

    Update for using postgresql:

    CREATE TABLE "A"
    (
       id integer NOT NULL, 
       b_id integer NOT NULL, 
       c_id integer NOT NULL, 
       CONSTRAINT id PRIMARY KEY (id), 
       CONSTRAINT b_id FOREIGN KEY (b_id) REFERENCES "B" (id) 
          ON UPDATE NO ACTION ON DELETE NO ACTION, --with no action restriction
       CONSTRAINT c_id FOREIGN KEY (c_id) REFERENCES "C" (id) 
          ON UPDATE CASCADE ON DELETE CASCADE  --with cascade restriction
    ) 
    WITH (
      OIDS = FALSE
    )
    ;
    ALTER TABLE "C" OWNER TO postgres;
    

    qid & accept id: (13584250, 13584444) query: SQL using listagg() and group by non duplicated values soup:

    Query:

    \n

    SQLFIDDLEEXAMPLE

    \n
    SELECT \nID, LISTAGG(TELNO, ', ') \nWITHIN GROUP (ORDER BY TELNO) \nAS TEL_LIST\nFROM   tbl\nGROUP BY ID;\n
    \n

    Result:

    \n
    | ID |                             TEL_LIST |\n---------------------------------------------\n|  1 |               0123456789, 0207983498 |\n|  2 | 0124339848, 02387694364, 09348374834 |\n
    \n soup wrap:

    Query:

    SQLFIDDLEEXAMPLE

    SELECT 
    ID, LISTAGG(TELNO, ', ') 
    WITHIN GROUP (ORDER BY TELNO) 
    AS TEL_LIST
    FROM   tbl
    GROUP BY ID;
    

    Result:

    | ID |                             TEL_LIST |
    ---------------------------------------------
    |  1 |               0123456789, 0207983498 |
    |  2 | 0124339848, 02387694364, 09348374834 |
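LISTAGG is Oracle syntax. As an analogue only, SQLite's group_concat produces the same grouping with the same sample data (the order of elements within each list is engine-dependent, whereas LISTAGG's WITHIN GROUP clause pins it down):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tbl (ID INTEGER, TELNO TEXT);
    INSERT INTO tbl VALUES
        (1, '0123456789'), (1, '0207983498'),
        (2, '0124339848'), (2, '02387694364'), (2, '09348374834');
""")

tel_lists = conn.execute("""
    SELECT ID, group_concat(TELNO, ', ') AS TEL_LIST
    FROM tbl
    GROUP BY ID
""").fetchall()
```

Each ID collapses to a single comma-separated TEL_LIST row, the same shape as the Oracle result above.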
    
    qid & accept id: (13595333, 13595976) query: How copy data from one database to another on different server? soup:

    Use Oracle export to export a whole table to a file, copy the file to serverB and import.

    \n
    http://www.orafaq.com/wiki/Import_Export_FAQ\n
    \n

    You can use rsync to sync an Oracle .dbf file or files to another server. This approach has problems, and syncing all of the database's files together works more reliably.

    \n

    For groups of records, write a query to build a pipe-delimited (or whatever delimiter suits your data) file with rows you need to move. Copy that file to serverB. Write a control file for sqlldr and use sqlldr to load the rows into the table. sqlldr is part of the oracle installation.

    \n
    http://www.thegeekstuff.com/2012/06/oracle-sqlldr/\n
    \n

    If you have db listeners up on each server and tnsnames knows about both, you can directly:

    \n
    insert into mytable@remote \nselect * from mytable\n  where somecolumn=somevalue;\n
    \n

    Look at the remote table section:

    \n
    http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_9014.htm\n
    \n

    If this is going to be an ongoing thing, create a db link from instance@serverA to instance@serverB.\nYou can then do anything you have permissions for with data on one instance or the other or both.

    \n
    http://psoug.org/definition/CREATE_DATABASE_LINK.htm\n
    \n soup wrap:

    Use Oracle export to export a whole table to a file, copy the file to serverB and import.

    http://www.orafaq.com/wiki/Import_Export_FAQ
    

    You can use rsync to sync an Oracle .dbf file or files to another server. This approach has problems, and syncing all of the database's files together works more reliably.

    For groups of records, write a query to build a pipe-delimited (or whatever delimiter suits your data) file with rows you need to move. Copy that file to serverB. Write a control file for sqlldr and use sqlldr to load the rows into the table. sqlldr is part of the oracle installation.

    http://www.thegeekstuff.com/2012/06/oracle-sqlldr/
    

    If you have db listeners up on each server and tnsnames knows about both, you can directly:

    insert into mytable@remote 
    select * from mytable
      where somecolumn=somevalue;
    

    Look at the remote table section:

    http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_9014.htm
    

    If this is going to be an ongoing thing, create a db link from instance@serverA to instance@serverB. You can then do anything you have permissions for with data on one instance or the other or both.

    http://psoug.org/definition/CREATE_DATABASE_LINK.htm
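`insert into mytable@remote select ...` is Oracle database-link syntax. The same copy-selected-rows-between-databases shape can be sketched with SQLite's ATTACH as a stand-in (an analogue, not the Oracle feature; the file paths are hypothetical temporaries):

```python
import os
import sqlite3
import tempfile

src = os.path.join(tempfile.mkdtemp(), "serverA.db")
dst = os.path.join(tempfile.mkdtemp(), "serverB.db")

# Populate the "remote" database
a = sqlite3.connect(src)
a.executescript("""
    CREATE TABLE mytable (id INTEGER, somecolumn TEXT);
    INSERT INTO mytable VALUES (1, 'keep'), (2, 'skip'), (3, 'keep');
""")
a.commit()
a.close()

# From the "local" database, attach the remote file and copy filtered rows
b = sqlite3.connect(dst)
b.execute("CREATE TABLE mytable (id INTEGER, somecolumn TEXT)")
b.execute("ATTACH DATABASE ? AS remote", (src,))
b.execute("""
    INSERT INTO main.mytable
    SELECT * FROM remote.mytable WHERE somecolumn = 'keep'
""")
b.commit()

copied = b.execute("SELECT id FROM main.mytable ORDER BY id").fetchall()
```

The `remote.` qualifier plays the role of `@remote`: one INSERT ... SELECT moves just the rows that match the predicate.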
    
    qid & accept id: (13618078, 13618305) query: Query where foreign key column can be NULL soup:

    If there is "no row at all for the uid", and you JOIN like you do, you get no row as result. Use LEFT [OUTER] JOIN instead:

    \n
    SELECT u.uid, u.fname, u.lname\nFROM   u \nLEFT   JOIN u_org o ON u.uid = o.uid \nLEFT   JOIN login l ON u.uid = l.uid \nWHERE (o.orgid = 2 OR o.orgid IS NULL)\nAND    l.access IS DISTINCT FROM 4;\n
    \n

    Also, you need the parentheses I added because of operator precedence (AND binds before OR).

    \n

    I use IS DISTINCT FROM instead of != in the last WHERE condition because, again, login.access might be NULL, which would not qualify.

    \n

    However, since you only seem to be interested in columns from table u to begin with, this alternative query would be more elegant:

    \n
    SELECT u.uid, u.fname, u.lname\nFROM   u\nWHERE (u.uid IS NULL OR EXISTS (\n   SELECT 1\n   FROM   u_org o\n   WHERE  o.uid = u.uid\n   AND    o.orgid = 2\n   ))\nAND NOT EXISTS (\n   SELECT 1\n   FROM   login l\n   WHERE  l.uid = u.uid\n   AND    l.access = 4\n   );\n
    \n

    This alternative has the additional advantage, that you always get one row from u, even if there are multiple rows in u_org or login.

    \n soup wrap:

    If there is "no row at all for the uid", and you JOIN like you do, you get no row as result. Use LEFT [OUTER] JOIN instead:

    SELECT u.uid, u.fname, u.lname
    FROM   u 
    LEFT   JOIN u_org o ON u.uid = o.uid 
    LEFT   JOIN login l ON u.uid = l.uid 
    WHERE (o.orgid = 2 OR o.orgid IS NULL)
    AND    l.access IS DISTINCT FROM 4;
    

    Also, you need the parentheses I added because of operator precedence (AND binds before OR).

    I use IS DISTINCT FROM instead of != in the last WHERE condition because, again, login.access might be NULL, which would not qualify.

    However, since you only seem to be interested in columns from table u to begin with, this alternative query would be more elegant:

    SELECT u.uid, u.fname, u.lname
    FROM   u
    WHERE (u.uid IS NULL OR EXISTS (
       SELECT 1
       FROM   u_org o
       WHERE  o.uid = u.uid
       AND    o.orgid = 2
       ))
    AND NOT EXISTS (
       SELECT 1
       FROM   login l
       WHERE  l.uid = u.uid
       AND    l.access = 4
       );
    

    This alternative has the additional advantage, that you always get one row from u, even if there are multiple rows in u_org or login.
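That advantage, one output row per u row no matter how many matching login rows exist, can be checked with a cut-down sketch. SQLite spells IS DISTINCT FROM as IS NOT, and only the login half of the query is exercised here (hypothetical data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE u (uid INTEGER);
    CREATE TABLE login (uid INTEGER, access INTEGER);
    INSERT INTO u VALUES (1), (2);
    INSERT INTO login VALUES (1, 4), (2, 1), (2, 2);  -- uid 2 has two logins
""")

# LEFT JOIN form: uid 2 is duplicated, once per matching login row
joined = conn.execute("""
    SELECT u.uid FROM u
    LEFT JOIN login l ON u.uid = l.uid
    WHERE l.access IS NOT 4          -- SQLite's spelling of IS DISTINCT FROM
""").fetchall()

# NOT EXISTS form: one row per u row, regardless of login multiplicity
via_exists = conn.execute("""
    SELECT u.uid FROM u
    WHERE NOT EXISTS (SELECT 1 FROM login l
                      WHERE l.uid = u.uid AND l.access = 4)
""").fetchall()
```

The join version emits uid 2 twice; the EXISTS version emits it once, which is exactly the difference the answer is pointing at.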

    qid & accept id: (13632163, 13632683) query: Create a view with alternate/default values for missing relationships soup:

    Is this along the right lines for what you're after?

    \n

    Runnable example here: http://sqlfiddle.com/#!3/894e9/4

    \n
    if object_id('[FloorName]') is not null drop table [FloorName]\nif object_id('[BuildingName]') is not null drop table [BuildingName]\nif object_id('[Floor]') is not null drop table [Floor]\nif object_id('[Building]') is not null drop table [Building]\nif object_id('[Language]') is not null drop table [Language]\n\ncreate table [Language]\n(\n    Id bigint not null identity(1,1) primary key clustered\n    , code nvarchar(5)\n)\ncreate table [Building]\n(\n    Id bigint not null identity(1,1) primary key clustered\n    , something nvarchar(64)\n)\ncreate table [Floor]\n(\n    Id bigint not null identity(1,1) primary key clustered\n    , BuildingId bigint foreign key references [Building](Id)\n    , something nvarchar(64)\n)\ncreate table [BuildingName]\n(\n    Id bigint not null identity(1,1) primary key clustered\n    , BuildingId bigint foreign key references [Building](Id)\n    , LanguageId bigint foreign key references [Language](Id)\n    , name nvarchar(64)\n)\ncreate table [FloorName]\n(\n    Id bigint not null identity(1,1) primary key clustered\n    , FloorId bigint foreign key references [Floor](Id)\n    , LanguageId bigint foreign key references [Language](Id)\n    , name nvarchar(64)\n)\n\ninsert [Language]\n      select 'en-us'\nunion select 'en-gb'\nunion select 'fr'\n\ninsert [Building]\n      select 'B1'\nunion select 'B2'\n\ninsert [Floor]\n      select 1, 'F1.1'\nunion select 1, 'F1.2'\nunion select 1, 'F1.3'\nunion select 1, 'F1.4'\nunion select 1, 'F1.5'\nunion select 2, 'F2.1'\nunion select 2, 'F2.2'\nunion select 2, 'F2.3'\nunion select 2, 'F2.4'\nunion select 2, 'F2.5'\n\ninsert BuildingName\nselect b.Id\n, l.id\n, 'BuildingName :: ' + b.something + ' ' + l.code\nfrom [Building] b\ncross join [Language] l\nwhere l.code in ('en-us', 'fr')\n\ninsert FloorName\nselect f.Id\n, l.Id\n, 'FloorName :: ' + f.something + ' ' + l.code\nfrom [Floor] f\ncross join [Language] l\nwhere f.something in ( 'F1.1', 'F1.2', 'F2.1')\nand l.code in ('en-us', 
'fr')\n\ninsert FloorName\nselect  f.Id\n, l.Id\n, 'FloorName :: ' + f.something + ' ' + l.code\nfrom [Floor] f\ncross join [Language] l\nwhere f.something not in ( 'F1.1', 'F1.2', 'F2.1')\nand l.code in ('en-us')\n\n\ndeclare @defaultLanguageId bigint\nselect @defaultLanguageId = id from [Language] where code = 'en-us' --default language is US English\n\nselect b.Id\n, b.something\n, bn.name\n, isnull(bfn.name, bfnDefault.name)\n, bl.code BuildingLanguage\nfrom [Building] b\ninner join [BuildingName] bn\n    on bn.BuildingId = b.Id\ninner join [Language] bl\n    on bl.Id = bn.LanguageId\ninner join [Floor] bf\n    on bf.BuildingId = b.Id\nleft outer join [FloorName] bfn\n    on bfn.FloorId = bf.Id\n    and bfn.LanguageId = bl.Id\nleft outer join [Language] bfl\n    on bfl.Id = bfn.LanguageId\nleft outer join [FloorName] bfnDefault\n    on bfnDefault.FloorId = bf.Id\n    and bfnDefault.LanguageId = @defaultLanguageId\n
    \n

    EDIT

    \n

    This version defaults any language:

    \n
    select b.Id\n, b.something\n, bn.name\n, isnull(bfn.name, (select top 1 name from [FloorName] x where x.FloorId=bf.Id))\n, bl.code BuildingLanguage\nfrom [Building] b\ninner join [BuildingName] bn\n    on bn.BuildingId = b.Id\ninner join [Language] bl\n    on bl.Id = bn.LanguageId\ninner join [Floor] bf\n    on bf.BuildingId = b.Id\nleft outer join [FloorName] bfn\n    on bfn.FloorId = bf.Id\n    and bfn.LanguageId = bl.Id\nleft outer join [Language] bfl\n    on bfl.Id = bfn.LanguageId\n
    \n soup wrap:

    Is this along the right lines for what you're after?

    Runnable example here: http://sqlfiddle.com/#!3/894e9/4

    if object_id('[FloorName]') is not null drop table [FloorName]
    if object_id('[BuildingName]') is not null drop table [BuildingName]
    if object_id('[Floor]') is not null drop table [Floor]
    if object_id('[Building]') is not null drop table [Building]
    if object_id('[Language]') is not null drop table [Language]
    
    create table [Language]
    (
        Id bigint not null identity(1,1) primary key clustered
        , code nvarchar(5)
    )
    create table [Building]
    (
        Id bigint not null identity(1,1) primary key clustered
        , something nvarchar(64)
    )
    create table [Floor]
    (
        Id bigint not null identity(1,1) primary key clustered
        , BuildingId bigint foreign key references [Building](Id)
        , something nvarchar(64)
    )
    create table [BuildingName]
    (
        Id bigint not null identity(1,1) primary key clustered
        , BuildingId bigint foreign key references [Building](Id)
        , LanguageId bigint foreign key references [Language](Id)
        , name nvarchar(64)
    )
    create table [FloorName]
    (
        Id bigint not null identity(1,1) primary key clustered
        , FloorId bigint foreign key references [Floor](Id)
        , LanguageId bigint foreign key references [Language](Id)
        , name nvarchar(64)
    )
    
    insert [Language]
          select 'en-us'
    union select 'en-gb'
    union select 'fr'
    
    insert [Building]
          select 'B1'
    union select 'B2'
    
    insert [Floor]
          select 1, 'F1.1'
    union select 1, 'F1.2'
    union select 1, 'F1.3'
    union select 1, 'F1.4'
    union select 1, 'F1.5'
    union select 2, 'F2.1'
    union select 2, 'F2.2'
    union select 2, 'F2.3'
    union select 2, 'F2.4'
    union select 2, 'F2.5'
    
    insert BuildingName
    select b.Id
    , l.id
    , 'BuildingName :: ' + b.something + ' ' + l.code
    from [Building] b
    cross join [Language] l
    where l.code in ('en-us', 'fr')
    
    insert FloorName
    select f.Id
    , l.Id
    , 'FloorName :: ' + f.something + ' ' + l.code
    from [Floor] f
    cross join [Language] l
    where f.something in ( 'F1.1', 'F1.2', 'F2.1')
    and l.code in ('en-us', 'fr')
    
    insert FloorName
    select  f.Id
    , l.Id
    , 'FloorName :: ' + f.something + ' ' + l.code
    from [Floor] f
    cross join [Language] l
    where f.something not in ( 'F1.1', 'F1.2', 'F2.1')
    and l.code in ('en-us')
    
    
    declare @defaultLanguageId bigint
    select @defaultLanguageId = id from [Language] where code = 'en-us' --default language is US English
    
    select b.Id
    , b.something
    , bn.name
    , isnull(bfn.name, bfnDefault.name)
    , bl.code BuildingLanguage
    from [Building] b
    inner join [BuildingName] bn
        on bn.BuildingId = b.Id
    inner join [Language] bl
        on bl.Id = bn.LanguageId
    inner join [Floor] bf
        on bf.BuildingId = b.Id
    left outer join [FloorName] bfn
        on bfn.FloorId = bf.Id
        and bfn.LanguageId = bl.Id
    left outer join [Language] bfl
        on bfl.Id = bfn.LanguageId
    left outer join [FloorName] bfnDefault
        on bfnDefault.FloorId = bf.Id
        and bfnDefault.LanguageId = @defaultLanguageId
    

    EDIT

    This version defaults any language:

    select b.Id
    , b.something
    , bn.name
    , isnull(bfn.name, (select top 1 name from [FloorName] x where x.FloorId=bf.Id))
    , bl.code BuildingLanguage
    from [Building] b
    inner join [BuildingName] bn
        on bn.BuildingId = b.Id
    inner join [Language] bl
        on bl.Id = bn.LanguageId
    inner join [Floor] bf
        on bf.BuildingId = b.Id
    left outer join [FloorName] bfn
        on bfn.FloorId = bf.Id
        and bfn.LanguageId = bl.Id
    left outer join [Language] bfl
        on bfl.Id = bfn.LanguageId
    
    qid & accept id: (13678718, 13679093) query: Execute a WHERE clause before another one soup:

    6 answers and 5 of them don't work (for SQL Server)...

    \n
    SELECT *\n  FROM foo\n WHERE CASE WHEN LEN(bar) = 4 THEN\n       CASE WHEN CONVERT(Int,bar) >= 5000 THEN 1 ELSE 0 END\n       END = 1;\n
    \n

    The WHERE/INNER JOIN conditions can be executed in any order that the query optimizer determines is best. There is no short-circuit boolean evaluation.

    \n

    Specifically for your question, since you KNOW that the data with 4-characters is a number, then you can do a direct lexicographical (text) comparison (yes it works):

    \n
    SELECT *\n  FROM foo\n WHERE LEN(bar) = 4 AND bar > '5000';\n
    \n soup wrap:

    6 answers and 5 of them don't work (for SQL Server)...

    SELECT *
      FROM foo
     WHERE CASE WHEN LEN(bar) = 4 THEN
           CASE WHEN CONVERT(Int,bar) >= 5000 THEN 1 ELSE 0 END
           END = 1;
    

    The WHERE/INNER JOIN conditions can be executed in any order that the query optimizer determines is best. There is no short-circuit boolean evaluation.

    Specifically for your question, since you KNOW that the 4-character data is numeric, you can do a direct lexicographic (text) comparison (yes, it works):

    SELECT *
      FROM foo
     WHERE LEN(bar) = 4 AND bar > '5000';
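
    The trick above can be sketched outside SQL Server too. Below is a minimal check in SQLite (standing in for SQL Server, with `LENGTH` for `LEN`), using the answer's hypothetical `foo`/`bar` names and invented rows; it assumes, as the answer does, that every 4-character value is numeric.

```python
import sqlite3

# SQLite stand-in for the SQL Server query; table/column names and the
# sample rows are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (bar TEXT)")
conn.executemany("INSERT INTO foo VALUES (?)",
                 [("4999",), ("5001",), ("9000",), ("123",), ("10000",)])

# Both operands are 4-character digit strings, so text comparison agrees
# with numeric comparison -- no CONVERT (and no conversion error) needed.
rows = conn.execute(
    "SELECT bar FROM foo WHERE LENGTH(bar) = 4 AND bar > '5000' ORDER BY bar"
).fetchall()
print([r[0] for r in rows])  # ['5001', '9000']
```

    Note the text comparison only works because every compared value has the same length; '999' > '5000' lexicographically, which is why the `LENGTH(bar) = 4` guard matters.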
    
    qid & accept id: (13717630, 13719535) query: Choose view select statement dynamically by session variable in PostgreSQL soup:

    Try something like this:

    \n
    SELECT 'A'\nFROM tableA\nWHERE current_setting(setting_name) = 'setting A'\nUNION ALL\nSELECT 'B'\nFROM tableB\nWHERE current_setting(setting_name) = 'setting B'\n
    \n

    Details on postgresql session variables here.

    \n

    UPD It will give the results of only one of the SELECTs. If current_setting(setting_name) equals 'setting A', the first query will return its results but the second won't.

    \n

    For your example the query will look like:

    \n
    SELECT 'A'\nFROM tableA\nWHERE myVar = 1\nUNION ALL\nSELECT 'B'\nFROM tableB\nWHERE myVar != 1\n
    \n

    UPD Checked: postgres executes only one of the queries. EXPLAIN ANALYZE shows that the second query was planned but marked as (never executes).

    \n soup wrap:

    Try something like this:

    SELECT 'A'
    FROM tableA
    WHERE current_setting(setting_name) = 'setting A'
    UNION ALL
    SELECT 'B'
    FROM tableB
    WHERE current_setting(setting_name) = 'setting B'
    

    Details on postgresql session variables here.

    UPD It will give the results of only one of the SELECTs. If current_setting(setting_name) equals 'setting A', the first query will return its results but the second won't.

    For your example the query will look like:

    SELECT 'A'
    FROM tableA
    WHERE myVar = 1
    UNION ALL
    SELECT 'B'
    FROM tableB
    WHERE myVar != 1
    

    UPD Checked: postgres executes only one of the queries. EXPLAIN ANALYZE shows that the second query was planned but marked as (never executes).
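
    The dispatch idea above is portable: only one branch's WHERE clause can be true, so only one SELECT contributes rows. A minimal sketch in SQLite, with a bound parameter standing in for PostgreSQL's current_setting()/myVar (table contents invented):

```python
import sqlite3

# Two one-row tables standing in for tableA/tableB from the answer.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tableA (v TEXT); INSERT INTO tableA VALUES ('from A');
CREATE TABLE tableB (v TEXT); INSERT INTO tableB VALUES ('from B');
""")

def pick(my_var):
    # Only one branch's WHERE clause holds, so only one SELECT yields rows.
    return conn.execute(
        "SELECT v FROM tableA WHERE ? = 1 "
        "UNION ALL "
        "SELECT v FROM tableB WHERE ? != 1",
        (my_var, my_var),
    ).fetchall()

print(pick(1))  # [('from A',)]
print(pick(2))  # [('from B',)]
```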

    qid & accept id: (13730484, 13731188) query: SELECT multiple rows from single column into single row soup:

    You would use FOR XML PATH for this:

    \n
    select p.name,\n  Stuff((SELECT ', ' + s.skillName \n         FROM skilllink l\n         left join skill s\n           on l.skillid = s.id \n         where p.id = l.personid\n         FOR XML PATH('')),1,1,'') Skills\nfrom person p\n
    \n

    See SQL Fiddle with Demo

    \n

    Result:

    \n
    | NAME |            SKILLS |\n----------------------------\n| Bill | Telepathy, Karate |\n|  Bob |            (null) |\n|  Jim |         Carpentry |\n
    \n soup wrap:

    You would use FOR XML PATH for this:

    select p.name,
      Stuff((SELECT ', ' + s.skillName 
             FROM skilllink l
             left join skill s
               on l.skillid = s.id 
             where p.id = l.personid
             FOR XML PATH('')),1,1,'') Skills
    from person p
    

    See SQL Fiddle with Demo

    Result:

    | NAME |            SKILLS |
    ----------------------------
    | Bill | Telepathy, Karate |
    |  Bob |            (null) |
    |  Jim |         Carpentry |
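
    FOR XML PATH + STUFF is SQL Server-specific; as a sketch of the same aggregation, SQLite's group_concat() in a correlated subquery produces the same shape of result (the schema below mirrors the answer's fiddle, with invented ids):

```python
import sqlite3

# Rebuild the person/skill/skilllink schema from the answer's demo.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (id INTEGER, name TEXT);
CREATE TABLE skill (id INTEGER, skillName TEXT);
CREATE TABLE skilllink (personid INTEGER, skillid INTEGER);
INSERT INTO person VALUES (1,'Bill'),(2,'Bob'),(3,'Jim');
INSERT INTO skill VALUES (1,'Telepathy'),(2,'Karate'),(3,'Carpentry');
INSERT INTO skilllink VALUES (1,1),(1,2),(3,3);
""")

# group_concat plays the role of the STUFF(... FOR XML PATH('')) trick;
# a person with no skills gets NULL, matching the (null) row for Bob.
rows = conn.execute("""
    SELECT p.name,
           (SELECT group_concat(s.skillName, ', ')
            FROM skilllink l JOIN skill s ON l.skillid = s.id
            WHERE l.personid = p.id) AS Skills
    FROM person p ORDER BY p.name
""").fetchall()
print(rows)
```

    One caveat: group_concat's element order is not guaranteed in SQLite, whereas the FOR XML PATH subquery can carry an ORDER BY.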
    
    qid & accept id: (13758033, 13758855) query: how to join multiple select statement together soup:

    I guess you need this?

    \n
    select * from mastertable\nleft join carcolortable on mastertable.carcolor=carcolortable.id\nleft join varianttable on mastertable.variant=varianttable.id\nleft join accessoriestable on mastertable.accessories=accessoriestable.id\n
    \n

    If, as you've described in the comment, mastertable.carcolor (and the others) contains a comma-separated list of IDs stored as varchar, then it should be:

    \n
    select * from mastertable\nleft join carcolortable on \n        ( ','+mastertable.carcolor+',' \n          LIKE \n          '%,'+CAST(carcolortable.id as varchar(100))+',%'\n         )\nleft join varianttable on \n        ( ','+mastertable.variant+',' \n          LIKE \n          '%,'+CAST(varianttable.id as varchar(100))+',%'\n         )\n\nleft join accessoriestable on \n        ( ','+mastertable.accessories+',' \n          LIKE \n          '%,'+CAST(accessoriestable.id as varchar(100))+',%'\n         )\n
    \n soup wrap:

    I guess you need this?

    select * from mastertable
    left join carcolortable on mastertable.carcolor=carcolortable.id
    left join varianttable on mastertable.variant=varianttable.id
    left join accessoriestable on mastertable.accessories=accessoriestable.id
    

    If, as you've described in the comment, mastertable.carcolor (and the others) contains a comma-separated list of IDs stored as varchar, then it should be:

    select * from mastertable
    left join carcolortable on 
            ( ','+mastertable.carcolor+',' 
              LIKE 
              '%,'+CAST(carcolortable.id as varchar(100))+',%'
             )
    left join varianttable on 
            ( ','+mastertable.variant+',' 
              LIKE 
              '%,'+CAST(varianttable.id as varchar(100))+',%'
             )
    
    left join accessoriestable on 
            ( ','+mastertable.accessories+',' 
              LIKE 
              '%,'+CAST(accessoriestable.id as varchar(100))+',%'
             )
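
    The delimiter-padding trick is worth seeing in isolation: wrapping both sides in commas turns "this id appears in the CSV column" into a plain LIKE test, so id 1 cannot falsely match inside '12'. A sketch in SQLite (|| instead of T-SQL's +, sample rows invented):

```python
import sqlite3

# One master row whose carcolor column holds a CSV of color ids.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mastertable (car TEXT, carcolor TEXT);
CREATE TABLE carcolortable (id INTEGER, color TEXT);
INSERT INTO mastertable VALUES ('sedan', '12,2,3');
INSERT INTO carcolortable VALUES (1,'red'),(2,'blue'),(12,'green');
""")

# ',12,2,3,' LIKE '%,1,%' is false, but '%,2,%' and '%,12,%' are true.
rows = conn.execute("""
    SELECT m.car, c.color
    FROM mastertable m
    LEFT JOIN carcolortable c
      ON ',' || m.carcolor || ',' LIKE '%,' || CAST(c.id AS TEXT) || ',%'
    ORDER BY c.id
""").fetchall()
print(rows)  # [('sedan', 'blue'), ('sedan', 'green')]
```

    As usual with CSV columns, this join can't use an index; normalizing into a link table is the better long-term fix.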
    
    qid & accept id: (13771275, 13771834) query: Query different IDs with different values? soup:

    For a list of players without duplicates an EXISTS semi-join is probably best:

    \n
    SELECT playerFirstName, playerLastName\nFROM   player AS p \nWHERE EXISTS (\n   SELECT 1\n   FROM   player2Statistic AS ps \n   WHERE  ps.playerID = p.playerID\n   AND    ps.StatisticID = 1\n   AND    ps.p2sStatistic > 65\n   )\nAND EXISTS (\n   SELECT 1\n   FROM   player2Statistic AS ps \n   WHERE  ps.playerID = p.playerID\n   AND    ps.StatisticID = 3\n   AND    ps.p2sStatistic > 295\n   );\n
    \n

    Column names and context are derived from the provided screenshots. The query in the question does not quite cover it.
    \nNote the parentheses; they are needed to cope with operator precedence.

    \n

    This is probably faster (duplicates are probably not possible):

    \n
    SELECT p.playerFirstName, p.playerLastName\nFROM   player           AS p \nJOIN   player2Statistic AS ps1 USING (playerID)\nJOIN   player2Statistic AS ps3 USING (playerID)\nWHERE  ps1.StatisticID = 1\nAND    ps1.p2sStatistic > 65\nAND    ps3.StatisticID = 3\nAND    ps3.p2sStatistic > 295;\n
    \n

    If your top-secret brand of RDBMS does not support the SQL-standard USING (playerID), substitute ON ps1.playerID = p.playerID (and likewise for ps3) to the same effect.

    \n

    It's a case of relational division. Find many more query techniques to deal with it under this related question:
    \nHow to filter SQL results in a has-many-through relation

    \n soup wrap:

    For a list of players without duplicates an EXISTS semi-join is probably best:

    SELECT playerFirstName, playerLastName
    FROM   player AS p 
    WHERE EXISTS (
       SELECT 1
       FROM   player2Statistic AS ps 
       WHERE  ps.playerID = p.playerID
       AND    ps.StatisticID = 1
       AND    ps.p2sStatistic > 65
       )
    AND EXISTS (
       SELECT 1
       FROM   player2Statistic AS ps 
       WHERE  ps.playerID = p.playerID
       AND    ps.StatisticID = 3
       AND    ps.p2sStatistic > 295
       );
    

    Column names and context are derived from the provided screenshots. The query in the question does not quite cover it.
    Note the parentheses; they are needed to cope with operator precedence.

    This is probably faster (duplicates are probably not possible):

    SELECT p.playerFirstName, p.playerLastName
    FROM   player           AS p 
    JOIN   player2Statistic AS ps1 USING (playerID)
    JOIN   player2Statistic AS ps3 USING (playerID)
    WHERE  ps1.StatisticID = 1
    AND    ps1.p2sStatistic > 65
    AND    ps3.StatisticID = 3
    AND    ps3.p2sStatistic > 295;
    

    If your top-secret brand of RDBMS does not support the SQL-standard USING (playerID), substitute ON ps1.playerID = p.playerID (and likewise for ps3) to the same effect.

    It's a case of relational division. Find many more query techniques to deal with it under this related question:
    How to filter SQL results in a has-many-through relation
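
    The EXISTS semi-join form runs unchanged on most engines. Here is a minimal sketch in SQLite, keeping the answer's table and column names but with an invented two-player roster (one player passes both thresholds, one fails the second):

```python
import sqlite3

# Tiny made-up roster: Ann passes both tests, Bob fails StatisticID = 3.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE player (playerID INTEGER, playerFirstName TEXT, playerLastName TEXT);
CREATE TABLE player2Statistic (playerID INTEGER, StatisticID INTEGER, p2sStatistic INTEGER);
INSERT INTO player VALUES (1,'Ann','Ace'),(2,'Bob','Bix');
INSERT INTO player2Statistic VALUES
  (1,1,70),(1,3,300),
  (2,1,70),(2,3,100);
""")

# Two EXISTS clauses = relational division: the player must satisfy both.
rows = conn.execute("""
    SELECT playerFirstName, playerLastName
    FROM player AS p
    WHERE EXISTS (SELECT 1 FROM player2Statistic ps
                  WHERE ps.playerID = p.playerID
                    AND ps.StatisticID = 1 AND ps.p2sStatistic > 65)
      AND EXISTS (SELECT 1 FROM player2Statistic ps
                  WHERE ps.playerID = p.playerID
                    AND ps.StatisticID = 3 AND ps.p2sStatistic > 295)
""").fetchall()
print(rows)  # [('Ann', 'Ace')]
```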

    qid & accept id: (13789442, 13791342) query: List all the jobs that have been executed within a specified date? soup:

    To list all the jobs that started within a specified date:

    \n
    declare @date date = getdate()\n\nSELECT\n    J.job_id,\n    J.name\nFROM msdb.dbo.sysjobs AS J \nINNER JOIN msdb.dbo.sysjobhistory AS H ON H.job_id = J.job_id\nWHERE run_date = CONVERT(VARCHAR(8), @date, 112)\nGROUP BY J.job_id, J.name\n
    \n

    To list all the steps for a specified job on a specified date with their status:

    \n
    declare @date date = getdate()\ndeclare @job_name varchar(50) = 'test'\n\nSELECT\n    H.run_date,\n    H.run_time,\n    H.step_id,\n    H.step_name,\n    H.run_status\nFROM msdb.dbo.sysjobs AS J\nINNER JOIN msdb.dbo.sysjobhistory AS H ON H.job_id = J.job_id\nWHERE \n    run_date = CONVERT(VARCHAR(8), @date, 112)\n    AND J.name = @job_name\n
    \n

    More information here.

    \n soup wrap:

    To list all the jobs that started within a specified date:

    declare @date date = getdate()
    
    SELECT
        J.job_id,
        J.name
    FROM msdb.dbo.sysjobs AS J 
    INNER JOIN msdb.dbo.sysjobhistory AS H ON H.job_id = J.job_id
    WHERE run_date = CONVERT(VARCHAR(8), @date, 112)
    GROUP BY J.job_id, J.name
    

    To list all the steps for a specified job on a specified date with their status:

    declare @date date = getdate()
    declare @job_name varchar(50) = 'test'
    
    SELECT
        H.run_date,
        H.run_time,
        H.step_id,
        H.step_name,
        H.run_status
    FROM msdb.dbo.sysjobs AS J
    INNER JOIN msdb.dbo.sysjobhistory AS H ON H.job_id = J.job_id
    WHERE 
        run_date = CONVERT(VARCHAR(8), @date, 112)
        AND J.name = @job_name
    

    More information here.
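
    The comparison above works because sysjobhistory stores run_date as an integer in yyyymmdd form, which is exactly what CONVERT(VARCHAR(8), @date, 112) produces. A sketch of the same conversion in Python, with a fixed sample date:

```python
from datetime import date

# Mimic CONVERT(VARCHAR(8), @date, 112): style 112 is the yyyymmdd form
# that matches sysjobhistory.run_date's integer encoding.
def to_run_date(d: date) -> int:
    return int(d.strftime("%Y%m%d"))

print(to_run_date(date(2012, 12, 7)))  # 20121207
```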

    qid & accept id: (13791170, 13791263) query: How do I join tables where a column has exactly all values that I want? soup:

    Try this

    \n
    SELECT\n    whatever\nFROM\n    A\n    INNER JOIN B\n        ON A.A_ID = B.A_ID\nWHERE\n    B.C_ID IN (4, 5)\n
    \n

    or

    \n
    SELECT\n    whatever\nFROM\n    A\n    INNER JOIN B\n        ON A.A_ID = B.A_ID\nWHERE\n    B.C_ID = 4 OR B.C_ID = 5\n
    \n
    \n

    UPDATE

    \n

    If you want only matching pairs

    \n
    SELECT\n    whatever\nFROM\n    A\n    INNER JOIN B\n        ON A.A_ID = B.A_ID\nWHERE\n    A.A_ID IN (SELECT A_ID\n               FROM B\n               WHERE C_ID IN (4, 5)\n               GROUP BY A_ID\n               HAVING COUNT(*) = 2) AND\n    B.C_ID IN (4, 5)\n
    \n

    The sub-select groups by A_ID and counts the records. The HAVING clause works like the WHERE clause but is executed after grouping. So the inner select returns only A_IDs corresponding to (4, 5)-pairs of C_ID. The whole query always returns an even number of records like

    \n
    \nA_ID  |  B_ID  |  C_ID\n 1    |   1    |   4\n 1    |   2    |   5\n 2    |   3    |   4\n 2    |   4    |   5\n
    \n
    \n

    EDIT

    \n

    If you only want A_IDs where not only C_IDs 4 and 5 are present but where no further C_IDs exist then change the query to

    \n
    SELECT B.*\nFROM A INNER JOIN B ON A.A_ID = B.A_ID\nWHERE B.C_ID IN (4, 5) AND\n      A.A_ID IN (SELECT A_ID\n                 FROM B\n                 GROUP BY A_ID\n                 HAVING MIN(C_ID)=4 AND MAX(C_ID)=5 AND COUNT(*)=2)\n
    \n

    If the two numbers (4 and 5 in this example) are always contiguous, you can drop the COUNT(*)=2 part.

    \n

    (Note: according to one of your comments, the join is on the A_ID column. I changed that in all my examples.)

    \n

    UPDATE by Robin

    \n

    Thanks, with your help I came up with this:

    \n
    SELECT\n    *\nFROM\n    A a\n    INNER JOIN B\n        ON a.A_ID = B.A_ID\nWHERE\n    (SELECT COUNT(*) FROM B b WHERE b.A_ID = a.A_ID and C_ID IN (4, 5)) =\n (SELECT COUNT(*) FROM A aa INNER JOIN B b ON aa.A_ID = b.A_ID WHERE b.A_ID = a.A_ID)\n
    \n soup wrap:

    Try this

    SELECT
        whatever
    FROM
        A
        INNER JOIN B
            ON A.A_ID = B.A_ID
    WHERE
        B.C_ID IN (4, 5)
    

    or

    SELECT
        whatever
    FROM
        A
        INNER JOIN B
            ON A.A_ID = B.A_ID
    WHERE
        B.C_ID = 4 OR B.C_ID = 5
    

    UPDATE

    If you want only matching pairs

    SELECT
        whatever
    FROM
        A
        INNER JOIN B
            ON A.A_ID = B.A_ID
    WHERE
        A.A_ID IN (SELECT A_ID
                   FROM B
                   WHERE C_ID IN (4, 5)
                   GROUP BY A_ID
                   HAVING COUNT(*) = 2) AND
        B.C_ID IN (4, 5)
    

    The sub-select groups by A_ID and counts the records. The HAVING clause works like the WHERE clause but is executed after grouping. So the inner select returns only A_IDs corresponding to (4, 5)-pairs of C_ID. The whole query always returns an even number of records like

    A_ID  |  B_ID  |  C_ID
     1    |   1    |   4
     1    |   2    |   5
     2    |   3    |   4
     2    |   4    |   5
    

    EDIT

    If you only want A_IDs where not only C_IDs 4 and 5 are present but where no further C_IDs exist then change the query to

    SELECT B.*
    FROM A INNER JOIN B ON A.A_ID = B.A_ID
    WHERE B.C_ID IN (4, 5) AND
          A.A_ID IN (SELECT A_ID
                     FROM B
                     GROUP BY A_ID
                     HAVING MIN(C_ID)=4 AND MAX(C_ID)=5 AND COUNT(*)=2)
    

    If the two numbers (4 and 5 in this example) are always contiguous, you can drop the COUNT(*)=2 part.

    (Note: according to one of your comments, the join is on the A_ID column. I changed that in all my examples.)

    UPDATE by Robin

    Thanks, with your help I came up with this:

    SELECT
        *
    FROM
        A a
        INNER JOIN B
            ON a.A_ID = B.A_ID
    WHERE
        (SELECT COUNT(*) FROM B b WHERE b.A_ID = a.A_ID and C_ID IN (4, 5)) =
     (SELECT COUNT(*) FROM A aa INNER JOIN B b ON aa.A_ID = b.A_ID WHERE b.A_ID = a.A_ID)
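
    The MIN/MAX/COUNT test from the EDIT is the core "exactly these values" check, and it ports directly. A sketch in SQLite with invented rows: A_ID 1 has exactly {4, 5}, A_ID 2 has an extra C_ID, A_ID 3 is missing 5.

```python
import sqlite3

# Only the B table is needed to demonstrate the grouping test itself.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE B (A_ID INTEGER, B_ID INTEGER, C_ID INTEGER);
INSERT INTO B VALUES
  (1,1,4),(1,2,5),
  (2,3,4),(2,4,5),(2,5,6),
  (3,6,4);
""")

# MIN=4 and MAX=5 bound the set; COUNT(*)=2 rules out duplicates/gaps,
# so only groups whose C_IDs are exactly {4, 5} survive.
rows = conn.execute("""
    SELECT A_ID FROM B
    GROUP BY A_ID
    HAVING MIN(C_ID) = 4 AND MAX(C_ID) = 5 AND COUNT(*) = 2
""").fetchall()
print(rows)  # [(1,)]
```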
    
    qid & accept id: (13832037, 13832050) query: MySQL: Select values based on current month and day soup:
    SELECT  *\nFROM    History\nWHERE   DATE_FORMAT(CURDATE(), '%M') = `month` AND\n        DAY(CURDATE()) = `day_num`\n
    \n\n

    OR

    \n
    SELECT  *\nFROM    History\nWHERE   MONTHNAME(CURDATE()) = `month` AND\n        DAY(CURDATE()) = `day_num`\n
    \n\n

    Other Sources

    \n\n soup wrap:
    SELECT  *
    FROM    History
    WHERE   DATE_FORMAT(CURDATE(), '%M') = `month` AND
            DAY(CURDATE()) = `day_num`
    

    OR

    SELECT  *
    FROM    History
    WHERE   MONTHNAME(CURDATE()) = `month` AND
            DAY(CURDATE()) = `day_num`
    

    Other Sources
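
    The matching logic (compare the stored month name and day number against today's date) is easy to sanity-check outside MySQL. A sketch in Python with a fixed "current" date standing in for CURDATE(), and invented history rows:

```python
from datetime import date

# Rows shaped like the History table: a month name plus a day number.
history = [
    {"month": "December", "day_num": 25, "event": "foo"},
    {"month": "July",     "day_num": 4,  "event": "bar"},
]

today = date(2012, 12, 25)          # stand-in for CURDATE()
month_name = today.strftime("%B")   # MONTHNAME(CURDATE()) equivalent
hits = [h["event"] for h in history
        if h["month"] == month_name and h["day_num"] == today.day]
print(hits)  # ['foo']
```

    Note the comparison is on the full English month name, so the stored `month` values must match MONTHNAME's spelling exactly.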

    qid & accept id: (13840468, 13840517) query: SQL query for fetching a single record in format "column heading: column value" soup:

    You can use the UNPIVOT function to do this; the version below concatenates the column name and value together, but you can always display them as separate columns:

    \n
    select col+':'+cast(value as varchar(10)) col\nfrom test\nunpivot\n(\n  value\n  for col in (A, B, C, D)\n) unpiv\n
    \n

    See SQL Fiddle with Demo

    \n

    The above works great if you have a known number of columns, but if you have 800 columns that you want to transform, you might want to use dynamic sql to perform this:

    \n
    DECLARE @colsUnpivot AS NVARCHAR(MAX),\n    @query  AS NVARCHAR(MAX)\n\nselect @colsUnpivot = stuff((select ','+quotename(C.name)\n         from sys.columns as C\n         where C.object_id = object_id('test')\n         for xml path('')), 1, 1, '')\n\nset @query \n  = 'select col+'':''+cast(value as varchar(10)) col\n     from test\n     unpivot\n     (\n       value\n       for col in ('+ @colsunpivot +')\n     ) u'\n\nexec(@query)\n
    \n

    See SQL Fiddle with Demo

    \n

    Note: when using UNPIVOT the datatypes of all of the columns that need to be transformed must be the same. So you might have to cast/convert data as needed.

    \n

    Edit #1: since the datatypes differ across your columns and you still need to unpivot them, you can use the following code.

    \n

    The first piece gets the list of columns that you want to unpivot, dynamically:

    \n
    select @colsUnpivot = stuff((select ','+quotename(C.name)\n             from sys.columns as C\n             where C.object_id = object_id('test')\n             for xml path('')), 1, 1, '')\n
    \n

    The second piece gets the same list of columns but wraps each column in a cast as a varchar:

    \n
    select @colsUnpivotCast = stuff((select ', cast('+quotename(C.name)+' as varchar(50)) as '+quotename(C.name)\n         from sys.columns as C\n         where C.object_id = object_id('test')\n         for xml path('')), 1, 1, '')\n
    \n

    Then your final query will be:

    \n
    DECLARE @colsUnpivot AS NVARCHAR(MAX),\n    @colsUnpivotCast AS NVARCHAR(MAX),\n    @query  AS NVARCHAR(MAX)\n\n\nselect @colsUnpivot = stuff((select ','+quotename(C.name)\n         from sys.columns as C\n         where C.object_id = object_id('test')\n         for xml path('')), 1, 1, '')\n\nselect @colsUnpivotCast = stuff((select ', cast('+quotename(C.name)+' as varchar(50)) as '+quotename(C.name)\n         from sys.columns as C\n         where C.object_id = object_id('test')\n         for xml path('')), 1, 1, '')\n\n\nset @query \n  = 'select col+'':''+value col\n     from\n    (\n      select '+@colsUnpivotCast+'\n      from test\n    ) src\n     unpivot\n     (\n       value\n       for col in ('+ @colsunpivot +')\n     ) u'\n\n\nexec(@query)\n
    \n

    See SQL Fiddle with Demo

    \n

    The UNPIVOT function is performing the same process as a UNION ALL which would look like this:

    \n
    select col+':'+value as col\nfrom\n(\n  select A value, 'A' col\n  from test\n  union all\n  select cast(B as varchar(10)) value, 'B' col\n  from test\n  union all\n  select cast(C as varchar(10)) value, 'C' col\n  from test\n  union all\n  select cast(D as varchar(10)) value, 'D' col\n  from test\n) src\n
    \n

    See SQL Fiddle with Demo

    \n

    The result of all of the queries is the same:

    \n
    |    COL |\n----------\n|    A:1 |\n| B:2.00 |\n|    C:3 |\n|    D:4 |\n
    \n

    Edit #2: using UNPIVOT strips out any of the null columns which could cause some data to drop. If that is the case, then you will want to wrap the columns with IsNull() to replace the null values:

    \n
    DECLARE @colsUnpivot AS NVARCHAR(MAX),\n    @colsUnpivotCast AS NVARCHAR(MAX),\n    @query  AS NVARCHAR(MAX)\n\n\nselect @colsUnpivot = stuff((select ','+quotename(C.name)\n         from sys.columns as C\n         where C.object_id = object_id('test')\n         for xml path('')), 1, 1, '')\n\nselect @colsUnpivotCast = stuff((select ', IsNull(cast('+quotename(C.name)+' as varchar(50)), '''') as '+quotename(C.name)\n         from sys.columns as C\n         where C.object_id = object_id('test')\n         for xml path('')), 1, 1, '')\n\n\nset @query \n  = 'select col+'':''+value col\n     from\n    (\n      select '+@colsUnpivotCast+'\n      from test\n    ) src\n     unpivot\n     (\n       value\n       for col in ('+ @colsunpivot +')\n     ) u'\n\n\nexec(@query)\n
    \n

    See SQL Fiddle with Demo

    \n

    Replacing the null values will give a result like this:

    \n
    |    COL |\n----------\n|    A:1 |\n| B:2.00 |\n|     C: |\n|    D:4 |\n
    \n soup wrap:

    You can use the UNPIVOT function to do this; the version below concatenates the column name and value together, but you can always display them as separate columns:

    select col+':'+cast(value as varchar(10)) col
    from test
    unpivot
    (
      value
      for col in (A, B, C, D)
    ) unpiv
    

    See SQL Fiddle with Demo

    The above works great if you have a known number of columns, but if you have 800 columns that you want to transform, you might want to use dynamic SQL to perform this:

    DECLARE @colsUnpivot AS NVARCHAR(MAX),
        @query  AS NVARCHAR(MAX)
    
    select @colsUnpivot = stuff((select ','+quotename(C.name)
             from sys.columns as C
             where C.object_id = object_id('test')
             for xml path('')), 1, 1, '')
    
    set @query 
      = 'select col+'':''+cast(value as varchar(10)) col
         from test
         unpivot
         (
           value
           for col in ('+ @colsunpivot +')
         ) u'
    
    exec(@query)
    

    See SQL Fiddle with Demo

    Note: when using UNPIVOT the datatypes of all of the columns that need to be transformed must be the same. So you might have to cast/convert data as needed.

    Edit #1: since the datatypes differ across your columns and you still need to unpivot them, you can use the following code.

    The first piece gets the list of columns that you want to unpivot, dynamically:

    select @colsUnpivot = stuff((select ','+quotename(C.name)
                 from sys.columns as C
                 where C.object_id = object_id('test')
                 for xml path('')), 1, 1, '')
    

    The second piece gets the same list of columns but wraps each column in a cast as a varchar:

    select @colsUnpivotCast = stuff((select ', cast('+quotename(C.name)+' as varchar(50)) as '+quotename(C.name)
             from sys.columns as C
             where C.object_id = object_id('test')
             for xml path('')), 1, 1, '')
    

    Then your final query will be:

    DECLARE @colsUnpivot AS NVARCHAR(MAX),
        @colsUnpivotCast AS NVARCHAR(MAX),
        @query  AS NVARCHAR(MAX)
    
    
    select @colsUnpivot = stuff((select ','+quotename(C.name)
             from sys.columns as C
             where C.object_id = object_id('test')
             for xml path('')), 1, 1, '')
    
    select @colsUnpivotCast = stuff((select ', cast('+quotename(C.name)+' as varchar(50)) as '+quotename(C.name)
             from sys.columns as C
             where C.object_id = object_id('test')
             for xml path('')), 1, 1, '')
    
    
    set @query 
      = 'select col+'':''+value col
         from
        (
          select '+@colsUnpivotCast+'
          from test
        ) src
         unpivot
         (
           value
           for col in ('+ @colsunpivot +')
         ) u'
    
    
    exec(@query)
    

    See SQL Fiddle with Demo

    The UNPIVOT function is performing the same process as a UNION ALL which would look like this:

    select col+':'+value as col
    from
    (
      select A value, 'A' col
      from test
      union all
      select cast(B as varchar(10)) value, 'B' col
      from test
      union all
      select cast(C as varchar(10)) value, 'C' col
      from test
      union all
      select cast(D as varchar(10)) value, 'D' col
      from test
    ) src
    

    See SQL Fiddle with Demo

    The result of all of the queries is the same:

    |    COL |
    ----------
    |    A:1 |
    | B:2.00 |
    |    C:3 |
    |    D:4 |
    

    Edit #2: using UNPIVOT strips out any of the null columns which could cause some data to drop. If that is the case, then you will want to wrap the columns with IsNull() to replace the null values:

    DECLARE @colsUnpivot AS NVARCHAR(MAX),
        @colsUnpivotCast AS NVARCHAR(MAX),
        @query  AS NVARCHAR(MAX)
    
    
    select @colsUnpivot = stuff((select ','+quotename(C.name)
             from sys.columns as C
             where C.object_id = object_id('test')
             for xml path('')), 1, 1, '')
    
    select @colsUnpivotCast = stuff((select ', IsNull(cast('+quotename(C.name)+' as varchar(50)), '''') as '+quotename(C.name)
             from sys.columns as C
             where C.object_id = object_id('test')
             for xml path('')), 1, 1, '')
    
    
    set @query 
      = 'select col+'':''+value col
         from
        (
          select '+@colsUnpivotCast+'
          from test
        ) src
         unpivot
         (
           value
           for col in ('+ @colsunpivot +')
         ) u'
    
    
    exec(@query)
    

    See SQL Fiddle with Demo

    Replacing the null values will give a result like this:

    |    COL |
    ----------
    |    A:1 |
    | B:2.00 |
    |     C: |
    |    D:4 |
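
    Since SQLite has no UNPIVOT, the UNION ALL form shown above is the portable one. A minimal sketch against a one-row table shaped like the answer's `test` table (values invented to match the sample output):

```python
import sqlite3

# One row, four columns of mixed types; B is stored as text '2.00'.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (A INTEGER, B TEXT, C INTEGER, D INTEGER)")
conn.execute("INSERT INTO test VALUES (1, '2.00', 3, 4)")

# One SELECT per column, glued with UNION ALL -- the manual unpivot.
# || coerces each value to text, like the casts in the T-SQL version.
rows = conn.execute("""
    SELECT 'A:' || A FROM test
    UNION ALL SELECT 'B:' || B FROM test
    UNION ALL SELECT 'C:' || C FROM test
    UNION ALL SELECT 'D:' || D FROM test
""").fetchall()
print([r[0] for r in rows])  # ['A:1', 'B:2.00', 'C:3', 'D:4']
```

    Unlike UNPIVOT, this form keeps NULL values unless you filter them out yourself, which is the same difference Edit #2 works around with IsNull().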
    
    qid & accept id: (13862099, 13863447) query: Magento SQL query: Get all simple products that are "not visible individually" soup:

    There are many reasons to not do this without the ORM, all of which may (or may not) apply to your needs (store filters, reading data from the correct table, etc). At the very least, you can use the product collection object to build the query which you would run:

    \n
    $coll = Mage::getModel('catalog/product')->getCollection();\n$coll->addAttributeToFilter('visibility' , Mage_Catalog_Model_Product_Visibility::VISIBILITY_NOT_VISIBLE);\necho $coll->getSelect();\n
    \n

    The resulting query will look like this:

    \n
    SELECT `e`.*, IF(at_visibility.value_id > 0, at_visibility.value, at_visibility_default.value) AS `visibility`\nFROM `catalog_product_entity` AS `e`\nINNER JOIN `catalog_product_entity_int` AS `at_visibility_default`\n    ON (`at_visibility_default`.`entity_id` = `e`.`entity_id`)\n    AND (`at_visibility_default`.`attribute_id` = '526')\n    AND `at_visibility_default`.`store_id` = 0\nLEFT JOIN `catalog_product_entity_int` AS `at_visibility` ON (`at_visibility`.`entity_id` = `e`.`entity_id`)\n    AND (`at_visibility`.`attribute_id` = '526')\n    AND (`at_visibility`.`store_id` = 1)\nWHERE (IF(at_visibility.value_id > 0, at_visibility.value, at_visibility_default.value) = '1')\n
    \n soup wrap:

    There are many reasons to not do this without the ORM, all of which may (or may not) apply to your needs (store filters, reading data from the correct table, etc). At the very least, you can use the product collection object to build the query which you would run:

    $coll = Mage::getModel('catalog/product')->getCollection();
    $coll->addAttributeToFilter('visibility' , Mage_Catalog_Model_Product_Visibility::VISIBILITY_NOT_VISIBLE);
    echo $coll->getSelect();
    

    The resulting query will look like this:

    SELECT `e`.*, IF(at_visibility.value_id > 0, at_visibility.value, at_visibility_default.value) AS `visibility`
    FROM `catalog_product_entity` AS `e`
    INNER JOIN `catalog_product_entity_int` AS `at_visibility_default`
        ON (`at_visibility_default`.`entity_id` = `e`.`entity_id`)
        AND (`at_visibility_default`.`attribute_id` = '526')
        AND `at_visibility_default`.`store_id` = 0
    LEFT JOIN `catalog_product_entity_int` AS `at_visibility` ON (`at_visibility`.`entity_id` = `e`.`entity_id`)
        AND (`at_visibility`.`attribute_id` = '526')
        AND (`at_visibility`.`store_id` = 1)
    WHERE (IF(at_visibility.value_id > 0, at_visibility.value, at_visibility_default.value) = '1')
    
    qid & accept id: (13901809, 13902047) query: Sql how to remove duplicate records with merging values? soup:

    As far as I know, you can't do this in a single statement: you can't UPDATE and DELETE in one query. However, you can do it as two queries, an UPDATE followed by a DELETE, like so:

    \n
    UPDATE Table1 t1\nINNER JOIN\n(\n  SELECT val1, GROUP_CONCAT(val2 SEPARATOR ',') Val2\n  FROM Table1\n  GROUP BY val1\n) t2 ON t1.val1 = t2.val1\nSET t1.val2 = t2.val2;\n\nDELETE t\nFROM table1 t\nWHERE id NOT IN\n(\n  SELECT ID\n  FROM\n  (\n    SELECT MIN(ID) id, val1\n    FROM table1\n    GROUP BY val1\n   ) sub\n );\n
    \n

    This will make the changes you want.

    \n

    Note: you have to put these two queries in one TRANSACTION.

    \n

    SQL Fiddle Demo

    \n

    These two queries will make your table look like:

    \n
    | ID |  VAL1 |    VAL2 |\n------------------------\n|  1 |  john | sam,joe |\n|  2 | larry |     tom |\n
    \n soup wrap:

    As far as I know, you can't do this in a single statement: you can't UPDATE and DELETE in one query. However, you can do it as two queries, an UPDATE followed by a DELETE, like so:

    UPDATE Table1 t1
    INNER JOIN
    (
      SELECT val1, GROUP_CONCAT(val2 SEPARATOR ',') Val2
      FROM Table1
      GROUP BY val1
    ) t2 ON t1.val1 = t2.val1
    SET t1.val2 = t2.val2;
    
    DELETE t
    FROM table1 t
    WHERE id NOT IN
    (
      SELECT ID
      FROM
      (
        SELECT MIN(ID) id, val1
        FROM table1
        GROUP BY val1
       ) sub
     );
    

    This will make the changes you want.

    Note: you have to put these two queries in one TRANSACTION.

    SQL Fiddle Demo

    These two queries will make your table look like:

    | ID |  VAL1 |    VAL2 |
    ------------------------
    |  1 |  john | sam,joe |
    |  2 | larry |     tom |
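
    The same merge-then-delete can be sketched in SQLite. Since SQLite lacks MySQL's UPDATE...JOIN, this sketch snapshots the merged values into a temp table first, then runs the UPDATE and DELETE inside one transaction, as the answer advises (sample rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INTEGER, val1 TEXT, val2 TEXT);
INSERT INTO table1 VALUES (1,'john','sam'),(2,'john','joe'),(3,'larry','tom');

BEGIN;
-- Snapshot: per val1, the merged val2 list and the id to keep.
CREATE TEMP TABLE merged AS
    SELECT MIN(id) AS keep_id, val1, group_concat(val2, ',') AS val2
    FROM table1 GROUP BY val1;
-- Step 1: overwrite every row's val2 with the merged list.
UPDATE table1
    SET val2 = (SELECT val2 FROM merged WHERE merged.val1 = table1.val1);
-- Step 2: drop all but the lowest-id row of each group.
DELETE FROM table1
    WHERE id NOT IN (SELECT keep_id FROM merged);
COMMIT;
""")
print(conn.execute("SELECT id, val1, val2 FROM table1 ORDER BY id").fetchall())
```

    The snapshot also sidesteps reading a table while updating it; group_concat's element order is not guaranteed, just as GROUP_CONCAT's isn't without ORDER BY.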
    
    qid & accept id: (13967474, 13967524) query: count no of instance of tuple with same value in some attribute soup:

    Try this:

    \n
    SELECT orderid, COUNT(orderid) no_of_iteraction\nFROM tblTemp \nGROUP BY orderid\n
    \n

    OR

    \n

    As per your request, using the SUM function:

    \n
    SELECT orderid, SUM(1) no_of_iteraction\nFROM tblTemp \nGROUP BY orderid\n
    \n

    OR

    \n
    SELECT orderid, SUM(cnt)\nFROM (SELECT orderid, 1 cnt FROM tblTemp ORDER BY orderid) AS A \nGROUP BY orderid\n
    \n soup wrap:

    Try this:

    SELECT orderid, COUNT(orderid) no_of_iteraction
    FROM tblTemp 
    GROUP BY orderid
    

    OR

    As per your request, using the SUM function:

    SELECT orderid, SUM(1) no_of_iteraction
    FROM tblTemp 
    GROUP BY orderid
    

    OR

    SELECT orderid, SUM(cnt)
    FROM (SELECT orderid, 1 cnt FROM tblTemp ORDER BY orderid) AS A 
    GROUP BY orderid
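
    COUNT(orderid) and SUM(1) per group are equivalent here (every row contributes 1), which a quick SQLite check confirms with a throwaway tblTemp:

```python
import sqlite3

# Three rows: orderid 1 appears twice, orderid 2 once.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblTemp (orderid INTEGER)")
conn.executemany("INSERT INTO tblTemp VALUES (?)", [(1,), (1,), (2,)])

# COUNT(orderid) and SUM(1) agree group by group.
rows = conn.execute("""
    SELECT orderid, COUNT(orderid) AS c, SUM(1) AS s
    FROM tblTemp GROUP BY orderid ORDER BY orderid
""").fetchall()
print(rows)  # [(1, 2, 2), (2, 1, 1)]
```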
    
    qid & accept id: (14056134, 14056652) query: reusing result of SELECT statement within a CASE statement sqllite soup:

    This is an alternative way of structuring the query:

    \n
    INSERT INTO search_email(meta, subject, body, sender, tos, ccs, folder, threadid)\n    SELECT 'meta1', 'subject1', 'body1', 'sender1', 'tos1', 'ccs1', 'folder1',\n            coalesce((SELECT search_email.threadID\n                      FROM search_email \n                      WHERE search_email.subject MATCH '%query%' AND \n                            ((search_email.sender = '%sender%' AND search_email.tos = '%receiver%') OR\n                             (search_email.sender = '%receiver%' AND search_email.tos = '%sender%')\n                            )\n                      LIMIT 1\n                     ),\n                     \n                    )\n
    \n

    This uses a SELECT instead of VALUES. It gets the thread id that matches the conditions, or NULL if none match. The second argument of the coalesce is then used when the first is NULL. You can generate the new id there.

    \n

    I do have a problem with this approach. To me, it seems that you should have a Thread table that manages the threads. The ThreadId should be an autoincremented id in this table. The emails table can then reference this id. In other words, I think the data model needs to be thought out in more detail.

    \n

    The following query will not work as-is, but it gives the idea of moving the thread lookup into the subquery:

    \n
    INSERT INTO search_email(meta, subject, body, sender, tos, ccs, folder, threadid)\n    SELECT 'meta1', 'subject1', 'body1', 'sender1', 'tos1', 'ccs1', 'folder1',\n            coalesce(t.threadID,\n                     \n                    )\n    from (SELECT search_email.threadID\n          FROM search_email \n          WHERE search_email.subject MATCH '%query%' AND \n                ((search_email.sender = '%sender%' AND search_email.tos = '%receiver%') OR\n                 (search_email.sender = '%receiver%' AND search_email.tos = '%sender%')\n                )\n          LIMIT 1\n         ) t\n
    \n

    The reason it will not work is that the FROM clause returns no rows rather than one row with a NULL value. So, to get what you want, you can use:

    \n
        from (SELECT search_email.threadID\n          FROM search_email \n          WHERE search_email.subject MATCH '%query%' AND \n                ((search_email.sender = '%sender%' AND search_email.tos = '%receiver%') OR\n                 (search_email.sender = '%receiver%' AND search_email.tos = '%sender%')\n                )\n          union all\n          select NULL\n          order by (case when threadId is not null then 1 else 0 end) desc\n          LIMIT 1\n         ) t\n
    \n

    This ensures that a NULL value is returned when there is no thread.

    \n soup wrap:

    This is an alternative way of structuring the query:

    INSERT INTO search_email(meta, subject, body, sender, tos, ccs, folder, threadid)
        SELECT 'meta1', 'subject1', 'body1', 'sender1', 'tos1', 'ccs1', 'folder1',
                coalesce((SELECT search_email.threadID
                          FROM search_email 
                          WHERE search_email.subject MATCH '%query%' AND 
                                ((search_email.sender = '%sender%' AND search_email.tos = '%receiver%') OR
                                 (search_email.sender = '%receiver%' AND search_email.tos = '%sender%')
                                )
                          LIMIT 1
                         ),
                         
                        )
    

    This uses a SELECT instead of VALUES. It gets the thread id that matches the conditions, or NULL if none match. The second argument of the coalesce is then used when the first is NULL. You can generate the new id there.

    I do have a problem with this approach. To me, it seems that you should have a Thread table that manages the threads. The ThreadId should be an autoincremented id in this table. The emails table can then reference this id. In other words, I think the data model needs to be thought out in more detail.

    The following query will not work, but it gives the idea of moving the thread lookup into the subquery:

    INSERT INTO search_email(meta, subject, body, sender, tos, ccs, folder, threadid)
        SELECT 'meta1', 'subject1', 'body1', 'sender1', 'tos1', 'ccs1', 'folder1',
                coalesce(t.threadID,
                         
                        )
        from (SELECT search_email.threadID
              FROM search_email 
              WHERE search_email.subject MATCH '%query%' AND 
                    ((search_email.sender = '%sender%' AND search_email.tos = '%receiver%') OR
                     (search_email.sender = '%receiver%' AND search_email.tos = '%sender%')
                    )
              LIMIT 1
             ) t
    

    The reason it will not work is that the FROM clause will return no rows rather than one row with a NULL value. So, to get what you want, you can use:

        from (SELECT search_email.threadID
              FROM search_email 
              WHERE search_email.subject MATCH '%query%' AND 
                    ((search_email.sender = '%sender%' AND search_email.tos = '%receiver%') OR
                     (search_email.sender = '%receiver%' AND search_email.tos = '%sender%')
                    )
              union all
              select NULL
              order by (case when threadId is not null then 1 else 0 end) desc
              LIMIT 1
             ) t
    

    This ensures that a NULL value is returned when there is no thread.
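    The UNION ALL fallback can be checked with a quick sketch using Python's sqlite3. The schema here is illustrative, and a plain equality filter stands in for the full-text MATCH; under DESC ordering SQLite sorts NULLs last, which plays the same role as the answer's CASE expression.

```python
import sqlite3

# Illustrative schema: plain equality stands in for the poster's MATCH filter.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE search_email (subject TEXT, threadID INTEGER)")
conn.execute("INSERT INTO search_email VALUES ('hello', 7)")

def find_thread(subject):
    # The fallback NULL row sorts last under DESC, so LIMIT 1 only picks it
    # when the first arm of the UNION ALL matched nothing.
    return conn.execute("""
        SELECT threadID FROM search_email WHERE subject = ?
        UNION ALL
        SELECT NULL
        ORDER BY threadID DESC
        LIMIT 1
    """, (subject,)).fetchone()[0]

existing = find_thread("hello")   # 7
missing = find_thread("nothing")  # None
```

    Feeding that result into COALESCE then gives either the existing thread id or the generated new one.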

    qid & accept id: (14091654, 14091682) query: How to query insert on updated rows? soup:
    soup wrap:
    UPDATE logs_month SET status ='1'
    WHERE DATE_FORMAT(month,"%m/%y") = '11/12';
    COMMIT;
    INSERT INTO some_table (columns)
    SELECT columns
    FROM logs_month WHERE DATE_FORMAT(month,"%m/%y") = '11/12';
    

    You can do it with a TRIGGER also:

    DELIMITER $$
    CREATE TRIGGER `logs_m` 
    AFTER UPDATE ON `logs_month`
    FOR EACH ROW 
    BEGIN
        IF NEW.status=1 THEN
        INSERT INTO some_table (field) values (NEW.field);
        END IF;
    END$$
    
    DELIMITER ;
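    The same trigger idea can be sketched in SQLite via Python, where a WHEN clause replaces the MySQL IF ... END IF block; the table and column names follow the (illustrative) ones used above.

```python
import sqlite3

# AFTER UPDATE trigger: copy the row into some_table whenever status becomes 1.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE logs_month (field TEXT, status INTEGER);
    CREATE TABLE some_table (field TEXT);
    CREATE TRIGGER logs_m AFTER UPDATE ON logs_month
    WHEN NEW.status = 1
    BEGIN
        INSERT INTO some_table (field) VALUES (NEW.field);
    END;
    INSERT INTO logs_month VALUES ('a', 0), ('b', 0);
""")
conn.execute("UPDATE logs_month SET status = 1 WHERE field = 'a'")
copied = [r[0] for r in conn.execute("SELECT field FROM some_table")]  # ['a']
```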
    

    You can do it like this:

    qid & accept id: (14124763, 14124880) query: How to write a Sql query to find distinct values that have never met the following "Where Not(a=x and b=x)" soup:

    soup wrap:

    One way would be

    SELECT DISTINCT CustomerId FROM Attributes a 
    WHERE NOT EXISTS (
        SELECT * FROM Attributes forbidden 
        WHERE forbidden.CustomerId = a.CustomerId AND forbidden.Class = _forbiddenClassValue_ AND forbidden.Code = _forbiddenCodeValue_
    )
    

    or with join

    SELECT DISTINCT a.CustomerId FROM Attributes a
    LEFT JOIN (
        SELECT CustomerId FROM Attributes
        WHERE Class = _forbiddenClassValue_ AND Code = _forbiddenCodeValue_
    ) havingForbiddenPair ON a.CustomerId = havingForbiddenPair.CustomerId
    WHERE havingForbiddenPair.CustomerId IS NULL
    

    Yet another way is to use EXCEPT, as per ypercube's answer
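    A quick check with Python's sqlite3 (made-up rows, and a made-up forbidden Class/Code pair standing in for the placeholders) that the NOT EXISTS form and the LEFT JOIN anti-join return the same customers:

```python
import sqlite3

# Customer 1 holds the forbidden ('X','A') pair; customer 2 does not.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Attributes (CustomerId INTEGER, Class TEXT, Code TEXT)")
conn.executemany("INSERT INTO Attributes VALUES (?,?,?)",
                 [(1, 'X', 'A'), (1, 'Y', 'B'), (2, 'Y', 'B')])

not_exists = [r[0] for r in conn.execute("""
    SELECT DISTINCT CustomerId FROM Attributes a
    WHERE NOT EXISTS (
        SELECT * FROM Attributes forbidden
        WHERE forbidden.CustomerId = a.CustomerId
          AND forbidden.Class = 'X' AND forbidden.Code = 'A')
""")]

anti_join = [r[0] for r in conn.execute("""
    SELECT DISTINCT a.CustomerId FROM Attributes a
    LEFT JOIN (SELECT CustomerId FROM Attributes
               WHERE Class = 'X' AND Code = 'A') havingForbiddenPair
      ON a.CustomerId = havingForbiddenPair.CustomerId
    WHERE havingForbiddenPair.CustomerId IS NULL
""")]
# both yield [2]
```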

    qid & accept id: (14159629, 14159677) query: sql table pivot soup:

    soup wrap:

    You did not specify which RDBMS you are using, but the following will work in nearly all of them:

    select blog,
      id,
      max(case when attribute = 'pid' then value end) postid,
      max(case when attribute = 'date' then value end) date,
      max(case when attribute = 'title' then value end) title
    from yourtable
    group by blog, id
    

    See SQL Fiddle with Demo
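    The conditional-aggregation version can be verified with Python's sqlite3 (SQLite has no PIVOT keyword, which is exactly when this form is needed); the sample rows mirror the demo data:

```python
import sqlite3

# One attribute/value row per property, pivoted into columns via MAX(CASE...).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yourtable (blog TEXT, id INTEGER, attribute TEXT, value TEXT)")
rows = [('p', 1, 'pid', 'abc1'), ('p', 1, 'date', 'abc2'), ('p', 1, 'title', 'abc3')]
conn.executemany("INSERT INTO yourtable VALUES (?,?,?,?)", rows)

pivoted = conn.execute("""
    SELECT blog, id,
      MAX(CASE WHEN attribute = 'pid' THEN value END) AS postid,
      MAX(CASE WHEN attribute = 'date' THEN value END) AS date,
      MAX(CASE WHEN attribute = 'title' THEN value END) AS title
    FROM yourtable
    GROUP BY blog, id
""").fetchall()
# [('p', 1, 'abc1', 'abc2', 'abc3')]
```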

    If you are using a database with the PIVOT function, then your query will be like this:

    select blog, id, pid as postid, date, title
    from 
    (
      select blog, id, attribute, value
      from yourtable
    ) src
    pivot
    (
      max(value)
      for attribute in (pid, date, title)
    ) piv
    

    See SQL Fiddle with Demo

    The result for both will be:

    | BLOG | ID | POSTID | DATE | TITLE |
    -------------------------------------
    |    p |  1 |   abc1 | abc2 |  abc3 |
    |    p |  2 |   abc1 | abc2 |  abc3 |
    |    p |  3 |   abc1 | abc2 |  abc3 |
    
    qid & accept id: (14168940, 14168955) query: How to delete rows in other database tables soup:

    soup wrap:

    You want to add ON DELETE CASCADE to your foreign key constraints.

    First, drop the current constraint without a cascading delete.

    ALTER TABLE Session_Completed
    DROP CONSTRAINT fk_sessionid
    

    Then, re-add the constraint with ON DELETE CASCADE:

    ALTER TABLE Session_Completed
      add CONSTRAINT fk_sessionid
        FOREIGN KEY (SessionId)
        REFERENCES session(SessionId)
        ON DELETE CASCADE;
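    A minimal sketch of the cascade behaviour in SQLite via Python (note that SQLite needs foreign-key enforcement switched on per connection); the table names follow the answer:

```python
import sqlite3

# Deleting the parent session row removes the child row automatically.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
    CREATE TABLE session (SessionId INTEGER PRIMARY KEY);
    CREATE TABLE Session_Completed (
        SessionId INTEGER,
        FOREIGN KEY (SessionId) REFERENCES session(SessionId) ON DELETE CASCADE
    );
    INSERT INTO session VALUES (1);
    INSERT INTO Session_Completed VALUES (1);
""")
conn.execute("DELETE FROM session WHERE SessionId = 1")
remaining = conn.execute("SELECT COUNT(*) FROM Session_Completed").fetchone()[0]  # 0
```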
    
    qid & accept id: (14206236, 14206250) query: I have a table where i need to group and count 2 columns within a certain date range soup:
    soup wrap:
    SELECT LocationX, LocationY, City, Type, COUNT(*) CountOfLocation  
    FROM   tableName
    WHERE  DateTimeStamp BETWEEN '2013-08-01 8:49:00' AND '2013-08-01 8:59:59'
    GROUP  BY LocationX, LocationY, City, Type
    

    UPDATE

    SELECT LocationX, LocationY, City, Type, COUNT(*) AS CountOfLocation  
    FROM   tableName
    WHERE  DateTimeStamp BETWEEN #2013-08-01 08:49:00# AND #2013-08-01 08:59:59#
    GROUP  BY LocationX, LocationY, City, Type
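    A runnable sketch of the grouped count with Python's sqlite3, using hypothetical sample rows and string timestamps. Note the zero-padded times: string comparison requires them, which is also why the updated query pads '8:49:00' to '08:49:00'.

```python
import sqlite3

# Count sightings per location/city/type inside a ten-minute window.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE tableName
    (LocationX REAL, LocationY REAL, City TEXT, Type TEXT, DateTimeStamp TEXT)""")
conn.executemany("INSERT INTO tableName VALUES (?,?,?,?,?)", [
    (1.0, 2.0, 'Oslo', 'A', '2013-08-01 08:50:00'),
    (1.0, 2.0, 'Oslo', 'A', '2013-08-01 08:55:00'),
    (1.0, 2.0, 'Oslo', 'A', '2013-08-01 09:10:00'),  # outside the window
])
counts = conn.execute("""
    SELECT LocationX, LocationY, City, Type, COUNT(*) AS CountOfLocation
    FROM tableName
    WHERE DateTimeStamp BETWEEN '2013-08-01 08:49:00' AND '2013-08-01 08:59:59'
    GROUP BY LocationX, LocationY, City, Type
""").fetchall()
# [(1.0, 2.0, 'Oslo', 'A', 2)]
```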
    
    qid & accept id: (14211346, 14211957) query: How to remove white space characters from a string in SQL Server soup:

    soup wrap:

    Using ASCII(RIGHT(ProductAlternateKey, 1)) you can see that the rightmost character in row 2 is a Line Feed, ASCII character 10.

    This cannot be removed using the standard LTRIM and RTRIM functions.

    You could, however, use REPLACE(ProductAlternateKey, CHAR(10), '').

    You may also want to account for carriage returns and tabs. These three (line feeds, carriage returns and tabs) are the usual culprits and can be removed with the following:

    LTRIM(RTRIM(REPLACE(REPLACE(REPLACE(ProductAlternateKey, CHAR(10), ''), CHAR(13), ''), CHAR(9), '')))
    

    If you encounter any more "white space" characters that can't be removed with the above then try one or all of the below:

    --NULL
    Replace([YourString],CHAR(0),'');
    --Horizontal Tab
    Replace([YourString],CHAR(9),'');
    --Line Feed
    Replace([YourString],CHAR(10),'');
    --Vertical Tab
    Replace([YourString],CHAR(11),'');
    --Form Feed
    Replace([YourString],CHAR(12),'');
    --Carriage Return
    Replace([YourString],CHAR(13),'');
    --Column Break
    Replace([YourString],CHAR(14),'');
    --Non-breaking space
    Replace([YourString],CHAR(160),'');
    

    This list of potential white space characters could be used to create a function such as:

    Create Function [dbo].[CleanAndTrimString] 
    (@MyString as varchar(Max))
    Returns varchar(Max)
    As
    Begin
        --NULL
        Set @MyString = Replace(@MyString,CHAR(0),'');
        --Horizontal Tab
        Set @MyString = Replace(@MyString,CHAR(9),'');
        --Line Feed
        Set @MyString = Replace(@MyString,CHAR(10),'');
        --Vertical Tab
        Set @MyString = Replace(@MyString,CHAR(11),'');
        --Form Feed
        Set @MyString = Replace(@MyString,CHAR(12),'');
        --Carriage Return
        Set @MyString = Replace(@MyString,CHAR(13),'');
        --Column Break
        Set @MyString = Replace(@MyString,CHAR(14),'');
        --Non-breaking space
        Set @MyString = Replace(@MyString,CHAR(160),'');
    
        Set @MyString = LTRIM(RTRIM(@MyString));
        Return @MyString
    End
    Go
    

    Which you could then use as follows:

    Select 
        dbo.CleanAndTrimString(ProductAlternateKey) As ProductAlternateKey
    from DimProducts
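    The nested REPLACE/TRIM chain itself is portable; here it is exercised on SQLite via Python (char() behaves the same way there, though the CREATE FUNCTION syntax above is SQL Server specific):

```python
import sqlite3

# Strip line feeds, carriage returns and tabs, then trim remaining spaces.
conn = sqlite3.connect(":memory:")
dirty = "ABC123\n\r\t  "
cleaned = conn.execute("""
    SELECT TRIM(REPLACE(REPLACE(REPLACE(?, char(10), ''),
                                char(13), ''),
                        char(9), ''))
""", (dirty,)).fetchone()[0]
# 'ABC123'
```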
    
    qid & accept id: (14253673, 14256098) query: Applying Where clause for Order by in SQL soup:

    soup wrap:

    The problem is that the table violates first normal form: EmpLotusNotes should not contain both the name of an employee and a country (presumably the country they work in).

    You should challenge the reasons why you are not allowed to clean up the structure and the data.

    See https://www.google.com.au/search?q=sql+first+normal+form+atomic

    The answer, if you still cannot normalise the database after challenging, is to create a query for countries, another query to split the data in the first table into first normal form, and then join the two.

    An example that works for MySQL follows; for MS SQL you would use CHARINDEX instead of INSTR and SUBSTRING instead of SUBSTR.

    select employeesWithCountries.*
    , countries.sort 
    from (
        select empId, empLotusNotes, substr( empLotusNotes, afterStartOfDelimiter ) country from (
            select empId
            , empLotusNotes
            , INSTR( empLotusNotes, '/' ) + 1 as afterStartOfDelimiter 
            from EmployeesLotusNotes
        ) employees
    ) employeesWithCountries
    inner join (
        SELECT 'Japan' as country, 1 as sort
        union
        SELECT 'China' as country, 2 as sort
        union
        SELECT 'India' as country, 3 as sort
        union
        SELECT 'USA' as country, 4 as sort
    ) countries
    on employeesWithCountries.country = countries.country
    order by countries.sort, employeesWithCountries.empLotusNotes
    

    Results.

    30003    Kyo Jun/Japan   Japan    1
    40004    Jee Lee/China   China    2
    10001    Amit B/India    India    3
    20002    Bharat C/India  India    3
    50005    Xavier K/USA    USA      4
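    The split-and-join technique runs unchanged on SQLite, whose instr() and substr() match the MySQL functions used above; here is a Python check with a few of the sample rows:

```python
import sqlite3

# Split the country off empLotusNotes, join to an inline sort table, order by it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EmployeesLotusNotes (empId INTEGER, empLotusNotes TEXT)")
conn.executemany("INSERT INTO EmployeesLotusNotes VALUES (?,?)",
                 [(30003, 'Kyo Jun/Japan'), (10001, 'Amit B/India'), (50005, 'Xavier K/USA')])
rows = conn.execute("""
    SELECT e.empId,
           substr(e.empLotusNotes, instr(e.empLotusNotes, '/') + 1) AS country,
           c.sort
    FROM EmployeesLotusNotes e
    JOIN (SELECT 'Japan' AS country, 1 AS sort
          UNION SELECT 'China', 2
          UNION SELECT 'India', 3
          UNION SELECT 'USA', 4) c
      ON substr(e.empLotusNotes, instr(e.empLotusNotes, '/') + 1) = c.country
    ORDER BY c.sort, e.empLotusNotes
""").fetchall()
# [(30003, 'Japan', 1), (10001, 'India', 3), (50005, 'USA', 4)]
```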
    
    qid & accept id: (14285554, 14288371) query: Zend Database Table getrow soup:

    soup wrap:

    There does not appear to be a way to do it in a single simple query. Also, fetchOne only gets the first column of the first record. That helper would return just the ID, and not the product_key.

    Option 1: Modify your ProductKeys model to get the key and set it as used:

    class My_Model_ProductKeys extends Zend_Db_Table_Abstract
    ...
    function getKeyAndMarkUsed()
    {
      // fetch the first unused key and mark it used
      $select = $this->select();
      $select->where('used = ?', 0)->order('id')->limit(1);
      $keyRow = $this->fetchRow($select);
      if ($keyRow){
        $this->update(array('used' => 1), 'id = ' . (int)$keyRow->id);
        return $keyRow->id;
      }
      else{
        //no keys left! what to do??? Create a new key?
        throw new Exception('No keys left!');
      }
    }
    

    Then you would just:

    $productKey = $this->_helper->model('ProductKeys')->getKeyAndMarkUsed();
    

    Option 2: Make a database procedure to do the above functionality and call that instead.

    qid & accept id: (14286714, 14287267) query: SQL sum of column value, unique per user per day soup:

    soup wrap:

    Try something like:

    SELECT
      DATE(created_at) AS date,
      SUM(CASE WHEN state = 'complete' THEN 1 ELSE 0 END) AS complete,
      SUM(CASE WHEN state = 'paid' THEN 1 ELSE 0 END) AS paid,
      COUNT(DISTINCT CASE WHEN state IN('new','paying','completing') THEN user_id ELSE NULL END) AS in_progress,
      COUNT(DISTINCT CASE WHEN state IN('payment_failed','completion_failed') THEN user_id ELSE NULL END) AS failed
    FROM orders
    WHERE created_at BETWEEN ? AND ?
    GROUP BY DATE(created_at);
    

    The main idea: COUNT(DISTINCT ...) counts unique user_id values and won't count NULLs.

    Details: aggregate functions, 4.2.7. Aggregate Expressions

    The whole query with the same style of counts and a simplified CASE WHEN ...:

    SELECT
      DATE(created_at) AS date,
      COUNT(CASE WHEN state = 'complete' THEN 1 END) AS complete,
      COUNT(CASE WHEN state = 'paid' THEN 1 END) AS paid,
      COUNT(DISTINCT CASE WHEN state IN('new','paying','completing') THEN user_id END) AS in_progress,
      COUNT(DISTINCT CASE WHEN state IN('payment_failed','completion_failed') THEN user_id END) AS failed
    FROM orders
    WHERE created_at BETWEEN ? AND ?
    GROUP BY DATE(created_at);
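    A small sqlite3 check of the COUNT(DISTINCT CASE ...) behaviour: a user with two in-progress rows is counted once, and the NULLs produced for non-matching states are skipped (illustrative rows).

```python
import sqlite3

# One complete order, one user in progress twice, one failed user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (created_at TEXT, state TEXT, user_id INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?,?,?)", [
    ('2013-01-01 10:00', 'complete', 1),
    ('2013-01-01 11:00', 'new', 2),
    ('2013-01-01 12:00', 'paying', 2),   # same user, still one in_progress
    ('2013-01-01 13:00', 'payment_failed', 3),
])
stats = conn.execute("""
    SELECT DATE(created_at) AS date,
      COUNT(CASE WHEN state = 'complete' THEN 1 END) AS complete,
      COUNT(DISTINCT CASE WHEN state IN ('new','paying','completing')
                          THEN user_id END) AS in_progress,
      COUNT(DISTINCT CASE WHEN state IN ('payment_failed','completion_failed')
                          THEN user_id END) AS failed
    FROM orders
    GROUP BY DATE(created_at)
""").fetchall()
# [('2013-01-01', 1, 1, 1)]
```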
    
    qid & accept id: (14296002, 14296370) query: Need to find Average of top 3 records grouped by ID in SQL soup:

    soup wrap:

    First - get the max(maxattached) for every customer and month:

    SELECT id,
           max(maxattached) as max_att         
    FROM myTable 
    WHERE weekending >= now() - interval '1 year' 
    GROUP BY id, date_trunc('month',weekending);
    

    Next - for every customer rank all his values:

    SELECT id,
           max_att,
           row_number() OVER (PARTITION BY id ORDER BY max_att DESC) as max_att_rank
    FROM (<previous query>) t;
    

    Next - get the top 3 for every customer:

    SELECT id,
           max_att
    FROM (<previous query>) t
    WHERE max_att_rank <= 3;
    

    Next - get the avg of the values for every customer:

    SELECT id,
           avg(max_att) as avg_att
    FROM (<previous query>) t
    GROUP BY id;
    

    Next - just put all the queries together and rewrite/simplify them for your case.
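    Assembled, the ranking steps collapse into one query. Here is a sketch run on SQLite via Python (3.25+ for window functions; toy numbers, with the month grouping and date filter dropped for brevity):

```python
import sqlite3

# Rank each customer's values, keep the top 3, average them.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (id INTEGER, maxattached INTEGER)")
conn.executemany("INSERT INTO myTable VALUES (?,?)",
                 [(1, 10), (1, 20), (1, 30), (1, 40), (2, 5)])
avgs = conn.execute("""
    SELECT id, AVG(max_att) AS avg_att
    FROM (SELECT id, maxattached AS max_att,
                 ROW_NUMBER() OVER (PARTITION BY id ORDER BY maxattached DESC)
                   AS max_att_rank
          FROM myTable)
    WHERE max_att_rank <= 3
    GROUP BY id
    ORDER BY id
""").fetchall()
# customer 1: avg of 40, 30, 20 -> 30.0; customer 2: just 5 -> 5.0
```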

    UPDATE: Here is an SQLFiddle with your test data and the queries: SQLFiddle.

    UPDATE2: Here is the query that will work on 8.1:

    SELECT customer_id,
           (SELECT round(avg(max_att),0)
            FROM (SELECT max(maxattached) as max_att         
                  FROM table1
                  WHERE weekending >= now() - interval '2 year' 
                    AND id = ct.customer_id
                  GROUP BY date_trunc('month',weekending)
                  ORDER BY max_att DESC
                  LIMIT 3) sub 
            ) as avg_att
    FROM customer_table ct;
    

    The idea is to take your initial query and run it for every customer (customer_table is a table with all the unique customer ids).

    Here is SQLFiddle with this query: SQLFiddle.

    Only tested on version 8.3 (8.1 is too old to be on SQLFiddle).

    qid & accept id: (14313834, 14314043) query: Apply the same aggregate to every column in a table soup:

    soup wrap:

    First, since COUNT() only counts non-null values, your query can be simplified:

    SELECT count(DISTINCT names) AS unique_names
          ,count(names) AS names_not_null
    FROM   table;
    

    But that's the number of non-null values and contradicts your description:

    count of the number of null values in the column

    For that you would use:

    count(*) - count(names) AS names_null
    

    Since count(*) counts all rows and count(names) only rows with non-null names.
    Removed inferior alternative after hint by @Andriy.
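    A quick check of these counts, sketched with Python's sqlite3 on a toy single-column table:

```python
import sqlite3

# Three non-null names (two distinct) and one NULL row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (names TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [('a',), ('a',), ('b',), (None,)])
unique_names, not_null, nulls = conn.execute("""
    SELECT COUNT(DISTINCT names),
           COUNT(names),
           COUNT(*) - COUNT(names)
    FROM t
""").fetchone()
# unique_names = 2, not_null = 3, nulls = 1
```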

    To automate that for all columns build an SQL statement off of the catalog table pg_attribute dynamically. You can use EXECUTE in a PL/pgSQL function to execute it immediately. Find full code examples with links to the manual and explanation under these closely related questions:

    qid & accept id: (14345171, 14477252) query: How to get data from two databases in two servers with one SELECT statement? soup:

    soup wrap:

    I have done this with MySQL, Oracle and SQL Server. You can create linked servers from a central MSSQL server to your Oracle and other MSSQL servers. You can then either query the object directly using the linked server or you can create a synonym to the linked server tables in your database.

    Steps around creating and using a linked server are:

    1. On your "main" MSSQL server create two linked servers to the servers that contain the two databases, or as you said database A and database B.
    2. You can then query the tables on the linked servers directly using plain TSQL select statements.

    To create a linked server to Oracle see this link: http://support.microsoft.com/kb/280106

    A little more about synonyms. If you are going to be using these linked server tables in a LOT of queries it might be worth the effort to use synonyms to help maintain the code for you. A synonym allows you to reference something under a different name.

    So for example when selecting data from a linked server you would generally use the following syntax to get the data:

    SELECT *
    FROM Linkedserver.database.schema.table
    

    If you created a synonym for Linkedserver.database.schema.table as DBTable1 the syntax would be:

    SELECT *
    FROM DBTable1
    

    It saves a bit of typing, plus if your linked server ever changed you would not need to make changes all over your code. Like I said, this can really be of benefit if you use linked servers in a lot of code.

    On a more cautionary note, you CAN do a join between two tables on different servers. However, this is normally painfully slow. I have found that selecting the data from the different servers into temp tables and joining the temp tables can generally speed things up. Your mileage may vary, but if you are going to join tables on different servers this technique can help.

    Let me know if you need more details.

    qid & accept id: (14355527, 14355745) query: Get row, if ID is not in Array/comma-seperated-list soup:

    soup wrap:

    Don't really like your solution. You are making things a lot harder for yourself with your underlying database design.

    You have two tables, one representing users and another representing questions. What you really need is a table linking the two concepts, something like user-questions.

    Suggested design:-

    create table `user-questions`
    (
       user_id int,
       question_id int,
       answered datetime
    )
    

    Suggested approach for recording answers.

    Every time your user answers a question, whack a row into user-questions to signify the fact that a user has answered the question.

    Under this structure, solving your specific problem, finding questions that haven't been answered yet, becomes trivial.

    -- Find a question that hasn't been answered by user id 22.
    SELECT
      q.* 
    FROM 
      `questions`
    LEFT OUTER JOIN `user-questions` uq
    ON q.question_id = uq.question_id
    -- Just a sample user ID
    AND uq.user_id = 22
    WHERE
      uq.question_id IS NULL
    

    I don't play day to day with MySQL, so please feel free to correct any typos, SO'ers. The approach is sound, though.
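    To sanity-check the anti-join, here is a minimal sketch in Python with SQLite standing in for MySQL (sample rows invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE questions (question_id INTEGER PRIMARY KEY, body TEXT);
    CREATE TABLE "user-questions" (user_id INTEGER, question_id INTEGER, answered TEXT);
    INSERT INTO questions VALUES (1, 'q1'), (2, 'q2'), (3, 'q3');
    -- user 22 has answered questions 1 and 3
    INSERT INTO "user-questions" VALUES (22, 1, '2013-01-01'), (22, 3, '2013-01-02');
""")

# LEFT OUTER JOIN with the user filter in the ON clause, then keep only
# the rows with no match: the questions user 22 has NOT answered.
unanswered = con.execute("""
    SELECT q.question_id
    FROM questions q
    LEFT OUTER JOIN "user-questions" uq
      ON q.question_id = uq.question_id AND uq.user_id = 22
    WHERE uq.question_id IS NULL
""").fetchall()
print(unanswered)  # [(2,)]
```

    Note the user filter has to live in the ON clause, not the WHERE clause, or the outer join degenerates into an inner one.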

    qid & accept id: (14366759, 14368352) query: How do you perform a join to a table with "OR" conditions? soup:
    soup wrap:
    SELECT o.* 
    FROM dbo.Orders o
    WHERE EXISTS ( SELECT *   FROM dbo.Transactions t1 
                   WHERE t1.OrderId = o.OrderId   AND t1.Code = 'TX33'
                 )
      AND EXISTS ( SELECT *   FROM dbo.Transactions t2 
                   WHERE t2.OrderId = o.OrderId   AND t2.Code = 'TX34'
                 )
      AND
        (     EXISTS ( SELECT *   FROM dbo.Transactions t1 
                       WHERE t1.OrderId = o.OrderId   AND t1.Code = 'TX35'
                     )
          AND EXISTS ( SELECT *   FROM dbo.Transactions t2 
                   WHERE t2.OrderId = o.OrderId   AND t2.Code = 'TX36'
                 )

        OR  EXISTS ( SELECT *   FROM dbo.Transactions t 
                     WHERE t.OrderId = o.OrderId    AND t.Code = 'TX37'
                   )
    
        OR    EXISTS ( SELECT *   FROM dbo.Transactions t1 
                       WHERE t1.OrderId = o.OrderId   AND t1.Code = 'TX38'
                     )
          AND EXISTS ( SELECT *   FROM dbo.Transactions t2 
                       WHERE t2.OrderId = o.OrderId   AND t2.Code = 'TX39'
                     )
        ) ;
    

    You could also write it like this:

    SELECT o.* 
    FROM dbo.Orders o
      JOIN
        ( SELECT OrderId
          FROM dbo.Transactions
          WHERE Code IN ('TX33', 'TX34', 'TX35', 'TX36', 'TX37', 'TX38', 'TX39')
          GROUP BY OrderId
          HAVING COUNT(DISTINCT CASE WHEN Code = 'TX33' THEN Code END) = 1
             AND COUNT(DISTINCT CASE WHEN Code = 'TX34' THEN Code END) = 1
             AND ( COUNT(DISTINCT 
                         CASE WHEN Code IN ('TX35', 'TX36') THEN Code END) = 2
                OR COUNT(DISTINCT CASE WHEN Code = 'TX37' THEN Code END) = 1
                OR COUNT(DISTINCT 
                         CASE WHEN Code IN ('TX38', 'TX39') THEN Code END) = 2
                 ) 
        ) t
        ON t.OrderId = o.OrderId ;
    
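    A cut-down sketch of the first query's EXISTS/AND/OR shape, runnable in Python with SQLite (only codes TX33 through TX37 are modelled; sample data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Orders (OrderId INTEGER PRIMARY KEY);
    CREATE TABLE Transactions (OrderId INTEGER, Code TEXT);
    INSERT INTO Orders VALUES (1), (2), (3);
    INSERT INTO Transactions VALUES
        (1, 'TX33'), (1, 'TX34'), (1, 'TX37'),
        (2, 'TX33'), (2, 'TX34'),
        (3, 'TX37');
""")

# TX33 AND TX34 required, plus (TX35 AND TX36) OR TX37; AND binds
# tighter than OR, which is what the original's indentation relies on.
rows = con.execute("""
    SELECT o.OrderId
    FROM Orders o
    WHERE EXISTS (SELECT * FROM Transactions t
                  WHERE t.OrderId = o.OrderId AND t.Code = 'TX33')
      AND EXISTS (SELECT * FROM Transactions t
                  WHERE t.OrderId = o.OrderId AND t.Code = 'TX34')
      AND (   EXISTS (SELECT * FROM Transactions t
                      WHERE t.OrderId = o.OrderId AND t.Code = 'TX35')
          AND EXISTS (SELECT * FROM Transactions t
                      WHERE t.OrderId = o.OrderId AND t.Code = 'TX36')
          OR  EXISTS (SELECT * FROM Transactions t
                      WHERE t.OrderId = o.OrderId AND t.Code = 'TX37'))
""").fetchall()
print(rows)  # [(1,)]
```

    Order 1 passes (TX33, TX34 and TX37 present); order 2 fails the OR group; order 3 lacks TX33/TX34.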
    qid & accept id: (14372302, 14372345) query: Sql query to get result from 3 tables soup:

    soup wrap:

    You should use UNION. Try this (untested):

    SELECT t.title_name, s.source_name, t1.text_content, t1.added_date   
    FROM Table1 t1
    JOIN Title T 
       ON t1.TitleId = T.TitleId
    JOIN Source S 
       ON t1.SourceId = S.SourceId
    UNION
    SELECT t.title_name, s.source_name, t2.description, t2.added_date   
    FROM Table2 t2
    JOIN Title T 
       ON t2.TitleId = T.TitleId
    JOIN Source S 
       ON t2.SourceId = S.SourceId
    UNION
    SELECT t.title_name, s.source_name, t3.description, t3.added_date   
    FROM Table3 t3
    JOIN Title T 
       ON t3.TitleId = T.TitleId
    JOIN Source S 
       ON t3.SourceId = S.SourceId
    

    Well I just realized you don't have a SourceId or TitleId in your Table3. Not going to be able to get that information, but you could still do:

    SELECT DISTINCT Title_Name, Source_Name, Text_Content, Added_Date
    FROM 
    (
       SELECT t.title_name, s.source_name, t1.text_content, t1.added_date   
       FROM Table1 t1
       JOIN Title T 
         ON t1.TitleId = T.TitleId
       JOIN Source S 
         ON t1.SourceId = S.SourceId
       UNION
       SELECT t.title_name, s.source_name, t2.description, t2.added_date   
       FROM Table2 t2
       JOIN Title T 
         ON t2.TitleId = T.TitleId
       JOIN Source S 
         ON t2.SourceId = S.SourceId
       UNION
       SELECT t3.title, 'Unknown', t3.description, t3.added_date   
       FROM Table3 t3
    ) t
    ORDER BY added_date
    
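    A reduced sketch of the UNION-with-placeholder approach in Python with SQLite (just two branches, invented sample data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Title  (TitleId INTEGER, title_name TEXT);
    CREATE TABLE Source (SourceId INTEGER, source_name TEXT);
    CREATE TABLE Table1 (TitleId INTEGER, SourceId INTEGER, text_content TEXT, added_date TEXT);
    CREATE TABLE Table3 (title TEXT, description TEXT, added_date TEXT);
    INSERT INTO Title  VALUES (1, 'News');
    INSERT INTO Source VALUES (1, 'Feed');
    INSERT INTO Table1 VALUES (1, 1, 'story', '2013-01-02');
    INSERT INTO Table3 VALUES ('Misc', 'note', '2013-01-01');
""")

# The branch without TitleId/SourceId supplies a literal placeholder
# ('Unknown'), as in the answer's second query.
rows = con.execute("""
    SELECT t.title_name, s.source_name, t1.text_content, t1.added_date
    FROM Table1 t1
    JOIN Title  t ON t1.TitleId  = t.TitleId
    JOIN Source s ON t1.SourceId = s.SourceId
    UNION
    SELECT t3.title, 'Unknown', t3.description, t3.added_date
    FROM Table3 t3
    ORDER BY added_date
""").fetchall()
print(rows)
# [('Misc', 'Unknown', 'note', '2013-01-01'), ('News', 'Feed', 'story', '2013-01-02')]
```

    Each UNION branch must produce the same number of columns in the same order; the result column names come from the first branch.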
    qid & accept id: (14374677, 14374705) query: Update a field just another one has some condition soup:

    soup wrap:

    You can use an inline IF statement, e.g.

    UPDATE articles
    SET publishedDate = IF(published = 1, 'new date HERE', publishedDate)
    -- WHERE condition here
    

    This assumes that 1 = true; if you store the boolean as a string, then IF(published = 'true', ...

    UPDATE 1

    -- assumes 0 = false, 1 = true
    SET @status := 1;
    SET @newDate := CURDATE();
    
    UPDATE articles
    SET publishedDate = IF(1 = @status, @newDate, publishedDate),
        published = @status
    -- WHERE condition here
    
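    MySQL's IF() can also be written as a portable CASE; a minimal sketch in Python with SQLite (invented sample rows, hard-coded date):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE articles (id INTEGER, published INTEGER, publishedDate TEXT);
    INSERT INTO articles VALUES (1, 1, '2012-01-01'), (2, 0, '2012-01-01');
""")

# IF(cond, a, b) as CASE: only published rows get the new date,
# everything else keeps its current value.
con.execute("""
    UPDATE articles
    SET publishedDate = CASE WHEN published = 1 THEN '2013-01-20'
                             ELSE publishedDate END
""")
rows = con.execute("SELECT id, publishedDate FROM articles ORDER BY id").fetchall()
print(rows)  # [(1, '2013-01-20'), (2, '2012-01-01')]
```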
    qid & accept id: (14385741, 14385817) query: Retrieve rows from a certain day but only in a certain hour soup:
    soup wrap:
    SELECT columns FROM dbo.table2
    WHERE 
        CONVERT(DATE, given_schedule) 
        = CONVERT(DATE, DATEADD(DAY, -3, CURRENT_TIMESTAMP))
    AND 
        DATEPART(HOUR, given_schedule) 
        = DATEPART(HOUR, CURRENT_TIMESTAMP);
    

    To address @Habo's point, you could also do:

    DECLARE @s SMALLDATETIME = CURRENT_TIMESTAMP;
    
    SET @s = DATEADD(DAY, -3, DATEADD(MINUTE, -DATEPART(MINUTE, @s), @s));
    
    SELECT columns FROM dbo.table2
      WHERE given_schedule >= @s
      AND given_schedule < DATEADD(HOUR, 1, @s);
    

    This is, of course, most useful if there is actually an index with given_schedule as the leading column.
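    The half-open range trick is easy to check in Python with SQLite (timestamps stored as text; the window bounds are hard-coded for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table2 (given_schedule TEXT);
    INSERT INTO table2 VALUES
        ('2013-01-15 09:15:00'),   -- 3 days ago, in the target hour
        ('2013-01-15 10:05:00'),   -- 3 days ago, wrong hour
        ('2013-01-16 09:30:00');   -- wrong day
""")

# Pretend "now" is 2013-01-18 09:40; three days earlier, snapped to the
# top of the hour, gives the half-open window [09:00, 10:00).
start = '2013-01-15 09:00:00'
end   = '2013-01-15 10:00:00'
rows = con.execute(
    "SELECT given_schedule FROM table2 "
    "WHERE given_schedule >= ? AND given_schedule < ?",
    (start, end)).fetchall()
print(rows)  # [('2013-01-15 09:15:00',)]
```

    Because no function wraps the column, an index on given_schedule can satisfy the predicate, which is exactly the point of the second query above.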

    qid & accept id: (14400023, 14400480) query: Display columns that contain a carriage return soup:

    soup wrap:

    This sounds like a homework question. So, let me give you some hints:

    (1) You can generate a table using syntax, such as:

    select chr(13) as badchar from dual union all
    select '!' . . .
    

    (2) You can cross join this into the table and use a very similar where clause.

    (3) You can then select the bad character from the table.

    (4) You'll need an aggregation.

    Actually, I would be inclined to drop the requirement of one row per student and instead have one row per student/bad character. Here is an approach:

    select a.id,
           a.addr_1, a.addr_2, a.addr_3, a.addr_4, a.addr_5, a.addr_6, a.addr_7,
           ((case when INSTR(a.addr_1, b.badChar) > 0 then 'addr_1,' else '' end) ||
            (case when INSTR(a.addr_2, b.badChar) > 0 then 'addr_2,' else '' end) ||
            (case when INSTR(a.addr_3, b.badChar) > 0 then 'addr_3,' else '' end) ||
            (case when INSTR(a.addr_4, b.badChar) > 0 then 'addr_4,' else '' end) ||
            (case when INSTR(a.addr_5, b.badChar) > 0 then 'addr_5,' else '' end) ||
            (case when INSTR(a.addr_6, b.badChar) > 0 then 'addr_6,' else '' end) ||
            (case when INSTR(a.addr_7, b.badChar) > 0 then 'addr_7,' else '' end)
           ) as addrs,
           b.badChar
    from a cross join
         (select chr(13) as badChar from dual) b
    WHERE INSTR(a.addr_1, b.badChar) > 0 OR
          INSTR(a.addr_2, b.badChar) > 0 OR
          INSTR(a.addr_3, b.badChar) > 0 OR
          INSTR(a.addr_4, b.badChar) > 0 OR
          INSTR(a.addr_5, b.badChar) > 0 OR
          INSTR(a.addr_6, b.badChar) > 0 OR
          INSTR(a.addr_7, b.badChar) > 0;
    

    It leaves an extra comma at the end of the column names. This can be removed by making this a subquery and doing string manipulations at the next level.

    To put all badchars on one line would require an aggregation. However, I am not clear what the 9th and 10th columns would contain in that case.
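    The same INSTR/CASE idea, cut down to two address columns, runs in Python with SQLite (char(13) standing in for Oracle's chr(13); sample data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a (id INTEGER, addr_1 TEXT, addr_2 TEXT);
    INSERT INTO a VALUES
        (1, 'line one' || char(13), 'clean'),
        (2, 'clean', 'clean');
""")

# instr() plays the role of Oracle's INSTR: concatenate one 'addr_N,'
# tag per column that contains the carriage return.
rows = con.execute("""
    SELECT id,
           (CASE WHEN instr(addr_1, char(13)) > 0 THEN 'addr_1,' ELSE '' END) ||
           (CASE WHEN instr(addr_2, char(13)) > 0 THEN 'addr_2,' ELSE '' END) AS addrs
    FROM a
    WHERE instr(addr_1, char(13)) > 0
       OR instr(addr_2, char(13)) > 0
""").fetchall()
print(rows)  # [(1, 'addr_1,')]
```

    As in the answer, the built-up column list carries a trailing comma that a wrapping query would have to strip.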

    qid & accept id: (14416241, 14416569) query: ORACLE Parsing XML string into separate records soup:

    soup wrap:

    You can use XMLTABLE. As your XML document seems to be a fragment in the row, I've wrapped it in a <root> element.

    select grp, substr(name, 
                  instr(name, '/', -1) + 1,
                  instr(name, '@') - instr(name, '/', -1) - 1
                 ) name
      from mytab m, 
           xmltable(xmlnamespaces('DAV:' as "D"), 
                    '/root/D:href' passing xmltype('<root>'||usr||'</root>')
                    columns
                    name varchar2(200) path './text()');
    

    I've assumed a table where your XML column is stored as a clob/varchar2 column called usr.

    example output for group1:

    SQL> select grp, substr(name,
      2                instr(name, '/', -1) + 1,
      3                instr(name, '@') - instr(name, '/', -1) - 1
      4               ) name
      5    from mytab m,
      6         xmltable(xmlnamespaces('DAV:' as "D"),
      7                  '/root/D:href' passing xmltype('<root>'||usr||'</root>')
      8                  COLUMNS
      9                  name VARCHAR2(200) path './text()');
    
    GRP    NAME
    ------ ----------
    group1 admin
    group1 oracle
    group1 user1
    

    http://sqlfiddle.com/#!4/435cd/1
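    XMLTABLE is Oracle-specific, but the wrap-in-root-then-extract logic is easy to mirror in plain Python (the fragment below is invented to match the answer's shape):

```python
import xml.etree.ElementTree as ET

# A fragment like the one in the answer: DAV:-namespaced href elements.
usr = (
    '<D:href xmlns:D="DAV:">/users/admin@example</D:href>'
    '<D:href xmlns:D="DAV:">/users/oracle@example</D:href>'
)

# Wrap the fragment in a root element (the same trick the SQL pulls with
# xmltype), then take the text of each DAV: href.
root = ET.fromstring('<root>' + usr + '</root>')
names = []
for href in root.findall('{DAV:}href'):
    text = href.text
    # substring between the last '/' and the '@', mirroring the
    # instr/substr arithmetic in the query
    names.append(text[text.rfind('/') + 1 : text.index('@')])
print(names)  # ['admin', 'oracle']
```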

    qid & accept id: (14442822, 14443077) query: Using a date field from a form in an access query soup:

    soup wrap:

    Use a PARAMETERS clause as the first line of your SQL to inform the db engine the form control contains a Date/Time value.

    PARAMETERS Forms!Frm_Start![Date] DateTime;
    

    Then use the parameter with DateAdd() in your WHERE clause:

    WHERE DateValue([TIMESTAMP])=DateAdd("d", 1, Forms!Frm_Start![Date])
    

    However, that will require running DateValue() for every row in the table. This should be faster with [TIMESTAMP] indexed:

    WHERE
            [TIMESTAMP] >= DateAdd("d", 1, Forms!Frm_Start![Date])
        AND [TIMESTAMP] < DateAdd("d", 2, Forms!Frm_Start![Date])
    
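    The same half-open window, sketched in Python with SQLite rather than Access SQL (datetime(?, '+1 day') plays the role of DateAdd; sample data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tbl (ts TEXT);
    INSERT INTO tbl VALUES
        ('2013-01-21 08:30:00'),   -- the day after the form date: keep
        ('2013-01-22 00:00:00'),   -- two days after: exclude
        ('2013-01-20 23:59:59');   -- the form date itself: exclude
""")

# With a form date of 2013-01-20, [date+1 day, date+2 days) selects the
# whole of 2013-01-21 without applying a function to the indexed column.
form_date = '2013-01-20'
rows = con.execute("""
    SELECT ts FROM tbl
    WHERE ts >= datetime(?, '+1 day')
      AND ts <  datetime(?, '+2 days')
""", (form_date, form_date)).fetchall()
print(rows)  # [('2013-01-21 08:30:00',)]
```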
    qid & accept id: (14446303, 14446434) query: Find records that have related records in the past soup:

    soup wrap:

    Try using NOT EXISTS instead of COUNT = 0. This should perform much better.

    SELECT  COUNT(*)
    FROM    log AS log_main
    WHERE   log_main.status=1 
    AND     NOT EXISTS
            (   SELECT 1
                FROM   log AS log_inner
                WHERE   log_inner.fingerprint_id=log_main.fingerprint_id
                AND     log_inner.status = 0
                AND     log_inner.date < log_main.date 
                AND     log_inner.date >= (log_main.date - INTERVAL 35 SECOND)
            );
    

    You should also ensure the table is properly indexed.

    EDIT

    I believe using LEFT JOIN/IS NULL is more efficient in MySQL than using NOT EXISTS, so this will perform better than the above (although perhaps not significantly):

    SELECT  COUNT(*)
    FROM    log AS log_main
            LEFT JOIN log AS log_inner
                ON log_inner.fingerprint_id=log_main.fingerprint_id
                AND log_inner.status = 0
                AND log_inner.date < log_main.date 
                AND log_inner.date >= (log_main.date - INTERVAL 35 SECOND)
    WHERE   log_main.status = 1 
    AND     Log_inner.fingerprint_id IS NULL;
    

    EDIT 2

    To get records with 1 or 2 attempts etc I would still use a JOIN, but like so:

    SELECT  COUNT(*)
    FROM    (   SELECT  log_Main.id
                FROM    log AS log_main
                        INNER JOIN log AS log_inner
                            ON log_inner.fingerprint_id=log_main.fingerprint_id
                            AND log_inner.status = 0
                            AND log_inner.date < log_main.date 
                            AND log_inner.date >= (log_main.date - INTERVAL 35 SECOND)
                WHERE   log_main.status = 1 
                GROUP BY log_Main.id
                HAVING COUNT(log_Inner.id) = 1
            ) d
    
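    A quick check of the NOT EXISTS window query in Python with SQLite (datetime(..., '-35 seconds') stands in for MySQL's INTERVAL arithmetic; sample rows invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE log (fingerprint_id INTEGER, status INTEGER, date TEXT);
    INSERT INTO log VALUES
        (1, 0, '2013-01-20 10:00:00'),  -- failure 10s before fp 1's success
        (1, 1, '2013-01-20 10:00:10'),
        (2, 1, '2013-01-20 11:00:00');  -- success with no prior failure
""")

# Count successes with NO failure in the preceding 35 seconds.
n = con.execute("""
    SELECT COUNT(*)
    FROM log AS log_main
    WHERE log_main.status = 1
      AND NOT EXISTS (
            SELECT 1 FROM log AS log_inner
            WHERE log_inner.fingerprint_id = log_main.fingerprint_id
              AND log_inner.status = 0
              AND log_inner.date <  log_main.date
              AND log_inner.date >= datetime(log_main.date, '-35 seconds')
          )
""").fetchone()[0]
print(n)  # 1
```

    Only fingerprint 2 counts; fingerprint 1's success has a failure 10 seconds earlier, inside the window.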
    qid & accept id: (14469652, 14469784) query: How to generate rows for date range by key soup:

    soup wrap:

    in 10g/11g you can use the model clause for this.

    SQL> with emps as (select rownum id, name, start_date,
      2                       end_date, trunc(end_date)-trunc(start_date) date_range
      3                  from table1)
      4  select name, the_date
      5    from emps
      6  model partition by(id as key)
      7        dimension by(0 as f)
      8        measures(name, start_date, cast(null as date) the_date, date_range)
      9        rules (the_date [for f from 0 to date_range[0] increment 1]  = start_date[0] + cv(f),
     10               name[any] = name[0]);
    
    NAME        THE_DATE
    ----------- ----------
    DAVID SMITH 01-01-2001
    DAVID SMITH 01-02-2001
    DAVID SMITH 01-03-2001
    DAVID SMITH 01-04-2001
    DAVID SMITH 01-05-2001
    DAVID SMITH 01-06-2001
    JOHN SMITH  02-07-2012
    JOHN SMITH  02-08-2012
    JOHN SMITH  02-09-2012
    
    9 rows selected.
    

    ie your base query:

    select rownum id, name, start_date,
           end_date, trunc(end_date)-trunc(start_date) date_range
      from table1
    

    just defines the dates + the range (I used rownum id, but if you have a PK you can use that instead).

    the partition splits our calculations per ID(unique row):

    6  model partition by(id as key)
    

    the measures:

    8        measures(name, start_date, cast(null as date) the_date, date_range)
    

    defines the attributes we will be outputting/calculating. In this case, we're working with name, and the start_date plus the range of rows to generate. Additionally I've defined a column the_date that will hold the calculated date (i.e. we want to calculate start_date + n, where n is from 0 to the range).

    the rules define HOW we are going to populate our columns:

    9        rules (the_date [for f from 0 to date_range[0] increment 1]  = start_date[0] + cv(f),
    10               name[any] = name[0]);
    

    so with 

    the_date [for f from 0 to date_range[0] increment 1]
    

    we are saying that we will generate the number of rows that date_range holds, plus 1 (i.e. 6 dates in total). The value of f can be referenced through the cv() (current value) function.

    so on row 1 for david, we'd have the_date[0] = start_date+0 and subsequently on row 2, we'd have the_date[1] = start_date+1, all the way up to start_date+5 (i.e. the end_date)

    p.s. for connect by you'd need to do something like this:

    select 
        A.EMPLOYEE_NAME,
        A.START_DATE+(b.r-1) AS INDIVIDUAL_DAY,
        TO_CHAR(A.START_DATE,'MM/DD/YYYY') START_DATE,
        TO_CHAR(A.END_DATE,'MM/DD/YYYY') END_DATE
    FROM table1 A
         cross join (select rownum r
                       from (select max(end_date-start_date) d from table1)
                      connect by level-1 <= d) b
     where A.START_DATE+(b.r-1) <= A.END_DATE
     order by 1, 2;
    

    i.e. isolate the connect by to a subquery, then filter out the rows where individual_day > end_date.

    but I WOULD NOT recommend this approach: its performance will be worse than the model approach (especially if the ranges get big).
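    MODEL and CONNECT BY are Oracle features; on engines without them, the same per-row date expansion can be done with a recursive CTE. A sketch in Python with SQLite (invented sample rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1 (name TEXT, start_date TEXT, end_date TEXT);
    INSERT INTO table1 VALUES
        ('DAVID SMITH', '2001-01-01', '2001-01-03'),
        ('JOHN SMITH',  '2012-02-07', '2012-02-08');
""")

# Each seed row fans out into one row per day until end_date is reached.
rows = con.execute("""
    WITH RECURSIVE days(name, d, end_date) AS (
        SELECT name, start_date, end_date FROM table1
        UNION ALL
        SELECT name, date(d, '+1 day'), end_date
        FROM days WHERE d < end_date
    )
    SELECT name, d FROM days ORDER BY name, d
""").fetchall()
print(rows)
# [('DAVID SMITH', '2001-01-01'), ('DAVID SMITH', '2001-01-02'),
#  ('DAVID SMITH', '2001-01-03'), ('JOHN SMITH', '2012-02-07'),
#  ('JOHN SMITH', '2012-02-08')]
```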

    qid & accept id: (14479213, 14479574) query: Join 2 rows in same table sql query soup:

    soup wrap:

    Try this (assuming 'HE' has a space on either side):

    select name, count
    from yourTable where charindex(' he ',name)=0
    union
    select 'HE' name, sum(count) as count
    from yourTable where charindex(' he ',name)>0
    

    Another way is:

    select A.name, sum(A.count) as count
    from (
        select case charindex(' he ',name) 
               when 0 then name else 'HE' end name, count
        from yourTable
    ) A
    group by A.name
    order by A.name
    
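    The CASE-then-GROUP BY shape of the second query can be checked in Python with SQLite (instr() stands in for charindex; the count column is renamed cnt to avoid the reserved word; sample data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE yourTable (name TEXT, cnt INTEGER);
    INSERT INTO yourTable VALUES
        ('say he said', 3), ('then he left', 2), ('other', 5);
""")

# Map every name containing ' he ' to the single label 'HE' with CASE,
# then GROUP BY the mapped name and sum the counts.
rows = con.execute("""
    SELECT CASE instr(name, ' he ') WHEN 0 THEN name ELSE 'HE' END AS mapped_name,
           SUM(cnt) AS cnt
    FROM yourTable
    GROUP BY mapped_name
    ORDER BY mapped_name
""").fetchall()
print(rows)  # [('HE', 5), ('other', 5)]
```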
    qid & accept id: (14482625, 14482648) query: Display full column name instead of shortened soup:

    soup wrap:

    SQL*Plus will format the column width to the size of the datatype. In the case of DUAL, DUMMY is a varchar2(1). You can control this with

    col DUMMY format a5
    

    i.e.:

    SQL> select * from dual;
    
    D
    -
    X
    
    SQL> col DUMMY format a5
    SQL> select * from dual;
    
    DUMMY
    -----
    X
    
    qid & accept id: (14501440, 14501561) query: How to delete leading empty space in a SQL Database Table using MS SQL Server Managment Studio soup:

    soup wrap:

    This will remove leading and trailing spaces

    Update tablename set fieldName = ltrim(rtrim(fieldName));
    

    Some versions of SQL support

    Update tablename set fieldName = trim(fieldName);
    

    If you just want to remove leading

    update tablename set fieldName = LTRIM(fieldName);
    
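    A quick runnable check of the LTRIM(RTRIM(...)) update, sketched against an in-memory SQLite table with made-up names:

```python
import sqlite3

# Strip leading and trailing spaces in place, as in the UPDATE above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablename (fieldName TEXT)")
conn.executemany("INSERT INTO tablename VALUES (?)",
                 [("  leading",), ("trailing  ",), ("  both  ",)])

conn.execute("UPDATE tablename SET fieldName = LTRIM(RTRIM(fieldName))")
values = [r[0] for r in conn.execute("SELECT fieldName FROM tablename")]
print(values)
```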
    qid & accept id: (14513314, 14513873) query: if statement using a query in sql soup:

    soup wrap:

    (1) Using a statement block

    IF 
    (SELECT COUNT(*) FROM Production.Product WHERE Name LIKE 'Touring-3000%' ) > 5
    BEGIN
       PRINT 'There are more than 5 Touring-3000 bikes.'
    END
    ELSE 
    BEGIN
       PRINT 'There are 5 or fewer Touring-3000 bikes.'
    END ;
    

    (2) Calling stored procedures.

    DECLARE @compareprice money, @cost money 
    EXECUTE Production.uspGetList '%Bikes%', 700, 
        @compareprice OUT, 
        @cost OUTPUT
    IF @cost <= @compareprice 
    BEGIN
        PRINT 'These products can be purchased for less than 
        $'+RTRIM(CAST(@compareprice AS varchar(20)))+'.'
    END
    ELSE
        PRINT 'The prices for all products in this category exceed 
        $'+ RTRIM(CAST(@compareprice AS varchar(20)))+'.'
    
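    The same IF-on-a-subquery pattern can be driven from client code: fetch the scalar COUNT(*) and branch on it. A minimal sketch with an invented table and only two matching rows:

```python
import sqlite3

# Run the COUNT(*) subquery, then branch on the scalar it returns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Product (Name TEXT)")
conn.executemany("INSERT INTO Product VALUES (?)",
                 [("Touring-3000 Blue",), ("Touring-3000 Yellow",)])

(n,) = conn.execute(
    "SELECT COUNT(*) FROM Product WHERE Name LIKE 'Touring-3000%'").fetchone()
message = ("There are more than 5 Touring-3000 bikes." if n > 5
           else "There are 5 or fewer Touring-3000 bikes.")
print(message)
```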

    More Examples:

    MSDN 1 MSDN 2

    qid & accept id: (14537280, 14537430) query: SQL instead-of trigger soup:

    soup wrap:

    Something like this:

    CREATE trigger update_LateRating_title INSTEAD OF UPDATE OF title ON LateRating
    BEGIN
      UPDATE Movie SET title = new.title WHERE movie.mID = old.mID;
    END;
    

    As requested in the comment, here is a trigger to update only movies that have reviews greater than 2 in LateRating:

    CREATE trigger update_LateRating_title INSTEAD OF 
    UPDATE OF title ON LateRating
    BEGIN
      UPDATE Movie SET title = new.title 
      WHERE movie.mID = old.mID 
      AND movie.mID IN (SELECT mID FROM LateRating WHERE stars > 2);
    END;
    

    (There are different ways to interpret this latter request. Should title updates be allowed for a movie that has more than 2 stars somewhere, or only for the record actually having more than 2 stars? My code is for the former choice.)
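    SQLite also supports INSTEAD OF triggers on views, so the idea can be exercised end to end; the schema below is a simplified stand-in for the Movie/LateRating pair in the question:

```python
import sqlite3

# An INSTEAD OF UPDATE trigger on a view redirects the write to the base table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Movie (mID INTEGER PRIMARY KEY, title TEXT);
    INSERT INTO Movie VALUES (1, 'Old Title');
    CREATE VIEW LateRating AS SELECT mID, title FROM Movie;
    CREATE TRIGGER update_LateRating_title
    INSTEAD OF UPDATE OF title ON LateRating
    BEGIN
        UPDATE Movie SET title = new.title WHERE Movie.mID = old.mID;
    END;
""")
conn.execute("UPDATE LateRating SET title = 'New Title' WHERE mID = 1")
(title,) = conn.execute("SELECT title FROM Movie WHERE mID = 1").fetchone()
print(title)
```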

    qid & accept id: (14540736, 14541743) query: sql avoid cartesian product soup:

    soup wrap:

    So it looks like you want all records from each of the tables that are identical, and then only those from each that are distinct. That means you need to UNION 3 sets of queries.

    Try something like this:

    SELECT t1.state, 
       t1.lname, 
       t1.fname, 
       t1.network as t1Network, 
       t2.network as t2Network
    FROM table1 t1 
       INNER JOIN table2 t2 
          ON t1.fname=t2.fname 
          AND t1.lname=t2.lname 
          AND t1.state=t2.state
          AND t1.network=t2.network
    UNION 
    SELECT t1.state, 
       t1.lname, 
       t1.fname, 
       t1.network as t1Network, 
       t2.network as t2Network
    FROM table1 t1 
       LEFT JOIN table2 t2 
          ON t1.fname=t2.fname 
          AND t1.lname=t2.lname 
          AND t1.state=t2.state
          AND t1.network=t2.network
    WHERE t2.network IS NULL
    UNION 
    SELECT t2.state, 
       t2.lname, 
       t2.fname, 
       t1.network as t1Network, 
       t2.network as t2Network
    FROM table2 t2 
       LEFT JOIN table1 t1
          ON t1.fname=t2.fname 
          AND t1.lname=t2.lname 
          AND t1.state=t2.state
          AND t1.network=t2.network
    WHERE t1.network IS NULL
    

    This should give you your desired results.

    And here is the SQL Fiddle to confirm.

    --EDIT

    Not thinking today -- you don't really need that first query. You can remove the WHERE condition from the 2nd query and it works the same way. Tired :-)

    Here is the updated query -- both should work just fine though, this is just easier to read:

    SELECT t1.state, 
       t1.lname, 
       t1.fname, 
       t1.network as t1Network, 
       t2.network as t2Network
    FROM table1 t1 
       LEFT JOIN table2 t2 
          ON t1.fname=t2.fname 
          AND t1.lname=t2.lname 
          AND t1.state=t2.state
          AND t1.network=t2.network
    UNION 
    SELECT t2.state, 
       t2.lname, 
       t2.fname, 
       t1.network as t1Network, 
       t2.network as t2Network
    FROM table2 t2 
       LEFT JOIN table1 t1
          ON t1.fname=t2.fname 
          AND t1.lname=t2.lname 
          AND t1.state=t2.state
          AND t1.network=t2.network
    WHERE t1.network IS NULL
    

    And the updated fiddle.

    BTW -- these should both work in MSAccess as it supports UNION.

    Good luck.

    qid & accept id: (14540917, 14540967) query: How can I create multiple rows from a single row (sql server 2008) soup:

    soup wrap:

    I'm a little confused by your question, but it sounds like you're trying to make your Company_X_Sales table have 3 rows instead of 1, just with varying quantities? If so, something like this should work:

    SELECT S.PO_Number, C.InterCO_PO_no, C.Sales_Order_No, C.Part_No, S.Qty
    FROM Company_X_Sales C
       JOIN CPC_Sales S ON C.InterCO_PO_no = S.InterCO_SO_No
    

    Here is the SQL Fiddle.

    That will give you the 4 rows with the correct quantities. Then you can delete and reinsert accordingly.

    To get those rows into the table, you have a few options, but something like this should work:

    --Flag the rows for deletion
    UPDATE Company_X_Sales SET Qty = -1 -- Or some arbitrary value that does not exist in the table
    
    --Insert new correct rows
    INSERT INTO Company_X_Sales 
    SELECT C.InterCO_PO_no, C.Sales_Order_No, C.Part_No, S.Qty
    FROM Company_X_Sales C
       JOIN CPC_Sales S ON C.InterCO_PO_no = S.InterCO_SO_No
    
    --Cleanup flagged rows for deletion
    DELETE FROM Company_X_Sales  WHERE Qty = -1
    

    Good luck.
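    The flag-insert-delete sequence can be sanity-checked in miniature; the sketch below uses simplified stand-in schemas in SQLite:

```python
import sqlite3

# Mark the old rows with an impossible Qty, insert the expanded rows
# from the join, then remove the flagged originals.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Company_X_Sales (po TEXT, Qty INTEGER);
    CREATE TABLE CPC_Sales (po TEXT, Qty INTEGER);
    INSERT INTO Company_X_Sales VALUES ('PO1', 30);
    INSERT INTO CPC_Sales VALUES ('PO1', 10), ('PO1', 20);
""")
conn.executescript("""
    UPDATE Company_X_Sales SET Qty = -1;          -- flag existing rows
    INSERT INTO Company_X_Sales
        SELECT C.po, S.Qty
        FROM Company_X_Sales C JOIN CPC_Sales S ON C.po = S.po;
    DELETE FROM Company_X_Sales WHERE Qty = -1;   -- drop the flagged rows
""")
rows = conn.execute("SELECT * FROM Company_X_Sales ORDER BY Qty").fetchall()
print(rows)
```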

    qid & accept id: (14565788, 14566013) query: How to group by month from Date field using sql soup:

    soup wrap:

    I would use this:

    SELECT  Closing_Date = DATEADD(MONTH, DATEDIFF(MONTH, 0, Closing_Date), 0), 
            Category,  
            COUNT(Status) TotalCount 
    FROM    MyTable
    WHERE   Closing_Date >= '2012-02-01' 
    AND     Closing_Date <= '2012-12-31'
    AND     Defect_Status1 IS NOT NULL
    GROUP BY DATEADD(MONTH, DATEDIFF(MONTH, 0, Closing_Date), 0), Category;
    

    This will group by the first of every month, so

    DATEADD(MONTH, DATEDIFF(MONTH, 0, '20130128'), 0)
    

    will give '20130101'. I generally prefer this method as it keeps dates as dates.

    Alternatively you could use something like this:

    SELECT  Closing_Year = DATEPART(YEAR, Closing_Date),
            Closing_Month = DATEPART(MONTH, Closing_Date),
            Category,  
            COUNT(Status) TotalCount 
    FROM    MyTable
    WHERE   Closing_Date >= '2012-02-01' 
    AND     Closing_Date <= '2012-12-31'
    AND     Defect_Status1 IS NOT NULL
    GROUP BY DATEPART(YEAR, Closing_Date), DATEPART(MONTH, Closing_Date), Category;
    

    It really depends what your desired output is. (Closing Year is not necessary in your example, but if the date range crosses a year boundary it may be).
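    Engines without DATEADD/DATEDIFF usually spell the same month bucketing differently; in SQLite, for instance, strftime('%Y-%m', ...) plays that role. A small sketch with invented data:

```python
import sqlite3

# Group rows into per-month buckets using the year-month prefix of the date.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (Closing_Date TEXT, Category TEXT)")
conn.executemany("INSERT INTO MyTable VALUES (?, ?)",
                 [("2012-02-03", "A"), ("2012-02-27", "A"), ("2012-03-05", "A")])

rows = conn.execute("""
    SELECT strftime('%Y-%m', Closing_Date) AS Closing_Month,
           Category, COUNT(*) AS TotalCount
    FROM MyTable
    GROUP BY Closing_Month, Category
    ORDER BY Closing_Month
""").fetchall()
print(rows)
```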

    qid & accept id: (14610658, 14610810) query: Average or calculate average soup:

    soup wrap:

    You can do this in one step. A tested example may be found here: http://sqlfiddle.com/#!2/05760/12

    SELECT 
      COUNT(*) / 
      COUNT(DISTINCT cast(`date` as date)) avg_posts_per_day
    FROM 
      posts
    

    Or you can do this in two steps:

    1. get posts per day,
    2. average the result of step 1.

    A tested example may be found here: http://sqlfiddle.com/#!2/05760/4

    SELECT 
      AVG(posts_per_day) AS AVG_POSTS_PER_DAY
    FROM (    
      SELECT 
        CAST(`date` as date), 
        COUNT(*) posts_per_day
      FROM posts  
      GROUP BY 
        CAST(`date` as date)
    ) ppd
    
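    The one-step average is easy to verify in miniature; note that in engines with integer division (SQLite below, likewise SQL Server) you need a * 1.0 to keep the fraction, which MySQL's / already preserves:

```python
import sqlite3

# Total posts divided by the number of distinct days posted on.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (created TEXT)")
conn.executemany("INSERT INTO posts VALUES (?)",
                 [("2013-01-01 10:00",), ("2013-01-01 11:00",),
                  ("2013-01-02 09:00",)])

(avg_posts,) = conn.execute("""
    SELECT COUNT(*) * 1.0 / COUNT(DISTINCT date(created)) AS avg_posts_per_day
    FROM posts
""").fetchone()
print(avg_posts)
```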
    qid & accept id: (14636287, 14697299) query: Convert local datetime from xml to datetime in sql soup:
    soup wrap:
    declare @XMLData xml = '
    
      0008E02B66DD_
      03.20
      2
      0001-01-01T00:00:00
      
        99
        2012-02-03T13:00:00+13:00
        
      
    ';
    
    select T.N.value('substring((RecordedDate/text())[1], 1, 19)', 'datetime'),
           T.N.value('(RecordedDate/text())[1]', 'datetime'),
           T.N.value('(RecordedDate/text())[1]', 'datetimeoffset')
    from @XMLData.nodes('/Upload/Sessions') as T(N);
    

    Result:

    2012-02-03 13:00:00.000 
    2012-02-03 00:00:00.000 
    2012-02-03 13:00:00.0000000 +13:00
    
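    The timezone arithmetic behind those three results can be sketched in Python: the full value is offset-aware and normalizes to UTC when cast to a plain datetime, while the first 19 characters are the local wall-clock time that substring(..., 1, 19) extracts:

```python
from datetime import datetime, timezone

raw = "2012-02-03T13:00:00+13:00"              # value as stored in the XML
aware = datetime.fromisoformat(raw)            # offset-aware datetime
as_utc = aware.astimezone(timezone.utc)        # what the plain datetime cast reports
wall_clock = datetime.fromisoformat(raw[:19])  # substring(..., 1, 19): local time
print(as_utc.isoformat())
print(wall_clock.isoformat())
```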
    qid & accept id: (14636901, 14637065) query: PostgreSQL ORDER BY with VIEWs soup:

    soup wrap:

    This is possible if you use row_number() over().

    Here is an example:

    SELECT
        p.*
        ,h.address
        ,h.appraisal
    FROM (SELECT *, row_number() over() rn FROM people) p
    LEFT JOIN homes h
        ON h.person_id = p.person_id
    ORDER BY p.rn, h.appraisal;
    

    And here is the SQL Fiddle you can test with.

    As @Erwin Brandstetter correctly points out, using rank() will produce the correct results and allow for sorting on additional fields (in this case, appraisal).

    SELECT
        p.*
        ,h.address
        ,h.appraisal
    FROM (SELECT *, rank() over() rn FROM people) p
    LEFT JOIN homes h
        ON h.person_id = p.person_id
    ORDER BY p.rn, h.appraisal;
    

    Think about it this way: using row_number(), it will always sort by that field only, regardless of any other sorting parameters. By using rank(), where ties share the same value, other fields can easily be sorted on.

    Good luck.

    qid & accept id: (14672688, 14672737) query: How to Update a MYSQL Column Based On Varying Conditions soup:

    soup wrap:

    I'd prefer to use CASE here.

    UPDATE TAble1
    SET Result = CASE value
                    WHEN 1 THEN x
                    WHEN 2 THEN y
                    ....
                    ELSE z
                END
    

    or

    UPDATE TAble1
    SET Result = CASE 
                    WHEN value = 1 THEN x
                    WHEN value = 2 THEN y
                    ....
                    ELSE z
                END
    
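    The simple-CASE form of the update runs unchanged on most engines; a runnable sketch in SQLite with placeholder x/y/z outcomes:

```python
import sqlite3

# Map each value to a result in a single UPDATE via a simple CASE.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (value INTEGER, Result TEXT)")
conn.executemany("INSERT INTO Table1 VALUES (?, ?)",
                 [(1, None), (2, None), (9, None)])

conn.execute("""
    UPDATE Table1
    SET Result = CASE value
                    WHEN 1 THEN 'x'
                    WHEN 2 THEN 'y'
                    ELSE 'z'
                 END
""")
rows = conn.execute("SELECT value, Result FROM Table1 ORDER BY value").fetchall()
print(rows)
```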
    qid & accept id: (14675304, 14675363) query: How to get (One Before Last) row in SQL Server 2005 soup:

    soup wrap:

    In SQL, tables are inherently unordered. So, let me assume that you have a column that specifies the ordering -- an id column, a date time, or something like that.

    The following does what you want:

    select top 4 *
    from (select top 5 *
          from Article a
          order by id desc
         ) a
    order by id asc
    

    If for some reason you don't have an id, you can take your chances with the following query:

    select a.*
    from (select a.*, row_number() over (order by (select NULL)) as seqnum,
                 count(*) over () as totcnt
          from Article a
         ) a
    where seqnum between totcnt - 5 and totcnt - 1
    

    I want to emphasize that this is not guaranteed to work. In my experience, I have seen that definition of seqnum assign sequential numbers to rows in order. BUT THIS IS NOT GUARANTEED TO WORK, and will probably not work in a multi-threaded environment. But, you might get lucky (particularly if your rows fit on one data page).

    By the way, you can use the same idea with a real column:

    select a.*
    from (select a.*, row_number() over (order by id) as seqnum,
                 count(*) over () as totcnt
          from Article a
         ) a
    where seqnum between totcnt - 5 and totcnt - 1
    
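    In engines with LIMIT/OFFSET, the nested-TOP trick collapses to a single clause: order descending and skip one row. A toy sketch in SQLite (not the answer's SQL Server syntax):

```python
import sqlite3

# Fetch the one-before-last row by ordering descending and skipping one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Article (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO Article VALUES (?, ?)",
                 [(1, "first"), (2, "middle"), (3, "last")])

row = conn.execute(
    "SELECT id, title FROM Article ORDER BY id DESC LIMIT 1 OFFSET 1"
).fetchone()
print(row)
```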
    qid & accept id: (14699703, 14700329) query: Loop through all rows and concat unique values in SQL table soup:

    soup wrap:

    You could concatenate the values as a string aggregation using LISTAGG for your Table1:

    SELECT col1,
         col2,
         col3,
         LISTAGG(col4, ',') WITHIN GROUP (ORDER BY col4) AS col4
    FROM agg_test
    GROUP BY col1,
         col2,
         col3;
    

    You could get the result as:

    col1    col2    col3    col4
    ______________________________________    
    val1    val2    val3    val4,val5,val6
    valx    valy    valz    val4,val5
    
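    LISTAGG is Oracle syntax; the closest SQLite equivalent is GROUP_CONCAT, which the sketch below uses (note that, unlike WITHIN GROUP (ORDER BY ...), GROUP_CONCAT does not guarantee element order):

```python
import sqlite3

# Collapse col4 values into a comma-separated string per group.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE agg_test (col1 TEXT, col4 TEXT)")
conn.executemany("INSERT INTO agg_test VALUES (?, ?)",
                 [("val1", "val4"), ("val1", "val5"), ("valx", "val4")])

rows = conn.execute("""
    SELECT col1, GROUP_CONCAT(col4, ',') AS col4
    FROM agg_test
    GROUP BY col1
    ORDER BY col1
""").fetchall()
print(rows)
```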
    qid & accept id: (14705215, 14705360) query: How to find rows in SQL / MySQL with ORDER BY soup:

    soup wrap:

    This will give current rank for user1:

    SELECT count(*) AS rank
    FROM user
    WHERE poin >= (SELECT poin FROM user WHERE name = 'user1')
    

    A small issue with this query is that if another user has the same points, they will be assigned the same rank - whether that is correct is questionable.

    If you want to simply add rank for every user, use this:

    SELECT
        @rank:=@rank+1 AS rank,
        name,
        poin
    FROM user,
        (SELECT @rank:=0) r
    ORDER BY poin DESC
    

    You can use a small variation of this query to get the rank of a single user, but avoid the issue of the same-rank ambiguity:

    SELECT *
    FROM (
        SELECT
            @rank:=@rank+1 AS rank,
            name,
            poin
        FROM user,
            (SELECT @rank:=0) r
        ORDER BY poin DESC
    ) x
    WHERE name = 'user1'
    
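    The COUNT(*)-based rank from the first query runs as-is on SQLite too; a miniature check with invented users:

```python
import sqlite3

# A user's rank is the number of users with at least as many points.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (name TEXT, poin INTEGER)")
conn.executemany("INSERT INTO user VALUES (?, ?)",
                 [("user1", 50), ("user2", 80), ("user3", 20)])

(rank,) = conn.execute("""
    SELECT COUNT(*) AS rank
    FROM user
    WHERE poin >= (SELECT poin FROM user WHERE name = 'user1')
""").fetchone()
print(rank)
```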
    qid & accept id: (14730469, 14730620) query: Row data to column soup:

    soup wrap:

    SQL Fiddle

    MS SQL Server 2008 Schema Setup:

    create table tblFile
    (
      FileName varchar(10),
      FileLocation varchar(30)
    )
    
    insert into tblFile values
    ('file1',                  '\\server1\folder1\file1'),
    ('file1',                  '\\server2\folder1\file1'),
    ('file2',                  '\\server1\folder1\file2'),
    ('file2',                  '\\server2\folder1\file2')
    

    Query 1:

    select T1.FileName,
           (
           select ', '+T2.FileLocation
           from tblFile as T2
           where T1.FileName = T2.FileName
           for xml path(''), type
           ).value('substring(text()[1], 3)', 'varchar(max)') as FileLocations
    from tblFile as T1
    group by T1.FileName
    

    Results:

    | FILENAME |                                    FILELOCATIONS |
    ---------------------------------------------------------------
    |    file1 | \\server1\folder1\file1, \\server2\folder1\file1 |
    |    file2 | \\server1\folder1\file2, \\server2\folder1\file2 |
    
    qid & accept id: (14732938, 14732970) query: Pivot on a single table soup:

    soup wrap:

    This type of data transformation is known as a PIVOT. Starting in SQL Server 2005, there is a function that can perform this data rotation for you, but it can be done in many different ways.

    You can use an aggregate function and a CASE to pivot the data:

    select
      name,
      max(case when date = '2013-04-01' then city end) [City 04/01/2013],
      max(case when date = '2013-05-01' then city end) [City 05/01/2013]
    from yourtable
    group by name
    

    See SQL Fiddle with Demo

    Or you can use the PIVOT function:

    select name, [2013-04-01] as [City 04/01/2013], [2013-05-01] as [City 05/01/2013]
    from
    (
      select name, city, date
      from yourtable
    ) src
    pivot
    (
      max(city)
      for date in ([2013-04-01], [2013-05-01])
    ) piv
    

    See SQL Fiddle with Demo.

    This can even be done by joining on your table multiple times:

    select d1.name,
      d1.city [City 04/01/2013], 
      d2.city [City 05/01/2013]
    from yourtable d1
    left join yourtable d2
      on d1.name = d2.name
      and d2.date = '2013-05-01'
    where d1.date = '2013-04-01'
    

    See SQL Fiddle with Demo.

    The above queries will work great if you have known dates that you want to transform into columns. But if you have an unknown number of columns, then you will want to use dynamic sql:

    DECLARE @cols AS NVARCHAR(MAX),
        @colNames AS NVARCHAR(MAX),
        @query  AS NVARCHAR(MAX)
    
    select @cols = STUFF((SELECT distinct ',' + QUOTENAME(convert(char(10), date, 120)) 
                        from yourtable
                FOR XML PATH(''), TYPE
                ).value('.', 'NVARCHAR(MAX)') 
            ,1,1,'')
    
    select @colNames = STUFF((SELECT distinct ',' + QUOTENAME(convert(char(10), date, 120)) +' as '+ QUOTENAME('City '+convert(char(10), date, 120))
                        from yourtable
                FOR XML PATH(''), TYPE
                ).value('.', 'NVARCHAR(MAX)') 
            ,1,1,'')
    
    set @query = 'SELECT name, ' + @colNames + ' from 
                 (
                    select name, 
                      city, 
                      convert(char(10), date, 120) date
                    from yourtable
                ) x
                pivot 
                (
                    max(city)
                    for date in (' + @cols + ')
                ) p '
    
    execute(@query)
    

    See SQL Fiddle with Demo

    All of them give the result:

    |   NAME | CITY 04/01/2013 | CITY 05/01/2013 |
    ----------------------------------------------
    |   Paul |           Milan |          Berlin |
    | Charls |            Rome |        El Cairo |
    |    Jim |           Tokyo |           Milan |
    | Justin |   San Francisco |           Paris |
    |   Bill |          London |          Madrid |
    
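    The same two-step pattern (discover the distinct column values, then build and run the statement) can be sketched outside T-SQL. Below is a minimal illustration using Python's sqlite3 module; the table name and sample rows are invented for the demo, and since SQLite has no PIVOT operator, conditional aggregation (MAX(CASE ...)) stands in for it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yourtable (name TEXT, city TEXT, date TEXT)")
conn.executemany(
    "INSERT INTO yourtable VALUES (?, ?, ?)",
    [("Paul", "Milan", "2013-04-01"), ("Paul", "Berlin", "2013-05-01"),
     ("Jim", "Tokyo", "2013-04-01"), ("Jim", "Milan", "2013-05-01")],
)

# Step 1: discover the column values, exactly like building @cols above.
dates = [r[0] for r in conn.execute(
    "SELECT DISTINCT date FROM yourtable ORDER BY date")]

# Step 2: assemble the statement text and execute it.
cols = ", ".join(
    f"MAX(CASE WHEN date = '{d}' THEN city END) AS \"City {d}\"" for d in dates)
sql = f"SELECT name, {cols} FROM yourtable GROUP BY name ORDER BY name"

rows = conn.execute(sql).fetchall()
print(rows)   # [('Jim', 'Tokyo', 'Milan'), ('Paul', 'Milan', 'Berlin')]
```

    One row per name, one city column per date — the same shape the dynamic T-SQL version produces.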
    qid & accept id: (14746540, 14781686) query: How to select from table where the table name is a local variable(informix) soup:

    Assuming you have a recent enough version of Informix (11.70), you should be able to use Dynamic SQL in SPL like this:

    soup wrap:

    Assuming you have a recent enough version of Informix (11.70), you should be able to use Dynamic SQL in SPL like this:

    BEGIN;
    
    CREATE TABLE rnmtask
    (
        pdf_column VARCHAR(32) NOT NULL,
        table_name VARCHAR(32) NOT NULL,
        task_code  INTEGER NOT NULL PRIMARY KEY
    );
    
    INSERT INTO rnmtask VALUES("symbol", "elements", 1);
    INSERT INTO rnmtask VALUES("name", "elements", 2);
    INSERT INTO rnmtask VALUES("atomic_number", "elements", 3);
    
    CREATE PROCEDURE rmg_request_file(al_task_code INTEGER)
        RETURNING VARCHAR(255) AS colval;
    
        DEFINE ll_pdf_column    VARCHAR(50);
        DEFINE ll_tb_name       VARCHAR(60);
        DEFINE stmt             VARCHAR(255);
        DEFINE result           VARCHAR(255);
    
        SELECT pdf_column, table_name
          INTO ll_pdf_column, ll_tb_name
          FROM rnmtask
         WHERE task_code = al_task_code;
    
        LET stmt = "SELECT " || ll_pdf_column || " FROM " || ll_tb_name;
        PREPARE p FROM stmt;
        DECLARE C CURSOR FOR p;
        OPEN C;
        WHILE sqlcode = 0
            FETCH C INTO result;
            IF sqlcode != 0 THEN
                EXIT WHILE;
            END IF;
            RETURN result WITH RESUME;
        END WHILE;
    
        CLOSE C;
        FREE C;
        FREE p;
    
    END PROCEDURE;
    
    EXECUTE PROCEDURE rmg_request_file(1);
    EXECUTE PROCEDURE rmg_request_file(2);
    EXECUTE PROCEDURE rmg_request_file(3);
    
    ROLLBACK;
    

    This assumes you have a convenient Table of Elements in your database:

    CREATE TABLE elements
    (
        atomic_number   INTEGER NOT NULL PRIMARY KEY CONSTRAINT c1_elements
                        CHECK (atomic_number > 0 AND atomic_number < 120),
        symbol          CHAR(3) NOT NULL UNIQUE CONSTRAINT c2_elements,
        name            CHAR(20) NOT NULL UNIQUE CONSTRAINT c3_elements,
        atomic_weight   DECIMAL(8, 4) NOT NULL,
        period          SMALLINT NOT NULL
                        CHECK (period BETWEEN 1 AND 7),
        group           CHAR(2) NOT NULL
                        -- 'L' for Lanthanoids, 'A' for Actinoids
                        CHECK (group IN ('1', '2', 'L', 'A', '3', '4', '5', '6',
                                         '7', '8', '9', '10', '11', '12', '13',
                                         '14', '15', '16', '17', '18')),
        stable          CHAR(1) DEFAULT 'Y' NOT NULL
                        CHECK (stable IN ('Y', 'N'))
    );
    
    INSERT INTO elements VALUES(  1, 'H',   'Hydrogen',        1.0079, 1, '1',  'Y');
    INSERT INTO elements VALUES(  2, 'He',  'Helium',          4.0026, 1, '18', 'Y');
    INSERT INTO elements VALUES(  3, 'Li',  'Lithium',         6.9410, 2, '1',  'Y');
    INSERT INTO elements VALUES(  4, 'Be',  'Beryllium',       9.0122, 2, '2',  'Y');
    INSERT INTO elements VALUES(  5, 'B',   'Boron',          10.8110, 2, '13', 'Y');
    INSERT INTO elements VALUES(  6, 'C',   'Carbon',         12.0110, 2, '14', 'Y');
    INSERT INTO elements VALUES(  7, 'N',   'Nitrogen',       14.0070, 2, '15', 'Y');
    INSERT INTO elements VALUES(  8, 'O',   'Oxygen',         15.9990, 2, '16', 'Y');
    INSERT INTO elements VALUES(  9, 'F',   'Fluorine',       18.9980, 2, '17', 'Y');
    INSERT INTO elements VALUES( 10, 'Ne',  'Neon',           20.1800, 2, '18', 'Y');
    INSERT INTO elements VALUES( 11, 'Na',  'Sodium',         22.9900, 3, '1',  'Y');
    INSERT INTO elements VALUES( 12, 'Mg',  'Magnesium',      24.3050, 3, '2',  'Y');
    INSERT INTO elements VALUES( 13, 'Al',  'Aluminium',      26.9820, 3, '13', 'Y');
    INSERT INTO elements VALUES( 14, 'Si',  'Silicon',        28.0860, 3, '14', 'Y');
    INSERT INTO elements VALUES( 15, 'P',   'Phosphorus',     30.9740, 3, '15', 'Y');
    INSERT INTO elements VALUES( 16, 'S',   'Sulphur',        32.0650, 3, '16', 'Y');
    INSERT INTO elements VALUES( 17, 'Cl',  'Chlorine',       35.4530, 3, '17', 'Y');
    INSERT INTO elements VALUES( 18, 'Ar',  'Argon',          39.9480, 3, '18', 'Y');
    
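    The core idea of the SPL routine — the table and column names come from a lookup table, so the statement has to be assembled and prepared at run time — can be sketched in Python with sqlite3. The schema below is a pared-down, hypothetical version of rnmtask/elements:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE rnmtask (pdf_column TEXT, table_name TEXT, task_code INTEGER PRIMARY KEY);
INSERT INTO rnmtask VALUES ('symbol', 'elements', 1), ('name', 'elements', 2);
CREATE TABLE elements (atomic_number INTEGER PRIMARY KEY, symbol TEXT, name TEXT);
INSERT INTO elements VALUES (1, 'H', 'Hydrogen'), (2, 'He', 'Helium');
""")

def rmg_request_file(task_code):
    # Look up which column of which table to read, then build and run the
    # statement -- identifiers cannot be bound as parameters, so the query
    # text itself has to be assembled, exactly as the SPL PREPARE does.
    col, tbl = conn.execute(
        "SELECT pdf_column, table_name FROM rnmtask WHERE task_code = ?",
        (task_code,)).fetchone()
    return [r[0] for r in conn.execute(f'SELECT "{col}" FROM "{tbl}" ORDER BY 1')]

print(rmg_request_file(1))   # ['H', 'He']
print(rmg_request_file(2))   # ['Helium', 'Hydrogen']
```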
    qid & accept id: (14790098, 14790136) query: MySQL - SUM of a group of time differences soup:
    soup wrap:
    Select  SEC_TO_TIME(SUM(TIME_TO_SEC(timediff(timeOut, timeIn)))) AS totalhours
    FROM volHours 
    WHERE username = 'skolcz'
    

    If not then maybe:

    Select  SEC_TO_TIME((SELECT SUM(TIME_TO_SEC(timediff(timeOut, timeIn)))
    FROM volHours
    WHERE username = 'skolcz')) as totalhours
    
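    A quick way to sanity-check the arithmetic: the same sum-of-differences can be sketched with Python's sqlite3 (sample rows invented; strftime('%s', ...) plays the role of TIME_TO_SEC, and the final SEC_TO_TIME-style formatting is done in Python):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE volHours (username TEXT, timeIn TEXT, timeOut TEXT)")
conn.executemany("INSERT INTO volHours VALUES (?, ?, ?)", [
    ("skolcz", "2013-02-01 09:00:00", "2013-02-01 11:30:00"),  # 2.5 hours
    ("skolcz", "2013-02-02 10:00:00", "2013-02-02 12:00:00"),  # 2.0 hours
])

# Sum the per-row differences in seconds, then format once at the end --
# the same shape as SEC_TO_TIME(SUM(TIME_TO_SEC(...))).
(total_seconds,) = conn.execute("""
    SELECT CAST(SUM(strftime('%s', timeOut) - strftime('%s', timeIn)) AS INTEGER)
    FROM volHours WHERE username = 'skolcz'
""").fetchone()
hours, rest = divmod(total_seconds, 3600)
print(f"{hours:02d}:{rest // 60:02d}:{rest % 60:02d}")   # 04:30:00
```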
    qid & accept id: (14792677, 14792810) query: MySQL upsert with extra check soup:

    soup wrap:

    Simple. Don't use VALUES() (you're already doing it to refer to the existing value of check_status):

    INSERT INTO some_table (description, comment, some_unique_key)
    VALUES ('some description', 'some comment', 32)
    ON DUPLICATE KEY UPDATE
    description = IF(check_status = 1, description, 'some description'),
    comment = IF(check_status = 1, comment, 'some comment')
    

    Or use it to set the new content rather than repeating yourself:

    INSERT INTO some_table (description, comment, some_unique_key)
    VALUES ('some description', 'some comment', 32)
    ON DUPLICATE KEY UPDATE
    description = IF(check_status = 1, description, VALUES(description)),
    comment = IF(check_status = 1, comment, VALUES(comment))
    
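    The same conditional upsert can be sketched in SQLite (3.24 or later), where ON CONFLICT ... DO UPDATE plays the role of ON DUPLICATE KEY UPDATE and excluded refers to the row that failed to insert, much like MySQL's VALUES(). The table layout below is invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE some_table (
    some_unique_key INTEGER PRIMARY KEY,
    description TEXT, comment TEXT, check_status INTEGER DEFAULT 0)""")
# Existing row with check_status = 1: the upsert must leave it untouched.
conn.execute("INSERT INTO some_table VALUES (32, 'old desc', 'old comment', 1)")

conn.execute("""
    INSERT INTO some_table (some_unique_key, description, comment)
    VALUES (32, 'some description', 'some comment')
    ON CONFLICT(some_unique_key) DO UPDATE SET
        description = CASE WHEN check_status = 1
                           THEN description ELSE excluded.description END,
        comment     = CASE WHEN check_status = 1
                           THEN comment ELSE excluded.comment END
""")
row = conn.execute("SELECT description, comment FROM some_table").fetchone()
print(row)   # ('old desc', 'old comment') -- check_status = 1 blocked the update
```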
    qid & accept id: (14830410, 14830905) query: Multiple Table Joins with WHERE clause soup:

    soup wrap:

    It seems like the following query is what you need. Notice that the filter for memberid = 200 has been moved to the join condition:

    select s.section_id,
      s.title,
      s.description,
      m.status
    from Sections s
    left join SectionMembers sm
      on s.section_id = sm.section_id
      and sm.memberid = 200
    left join MemberStatus m
      on sm.status_code = m.status_code
    where s.section_ownerid = 100;
    

    Note: while your desired result shows that section_id=2 has a status of ActiveMember, there is no way in your sample data to make this value link to section 2.

    This query gives the result:

    | SECTION_ID |  TITLE | DESCRIPTION |         STATUS |
    ------------------------------------------------------
    |          1 | title1 |       desc1 |  PendingMember |
    |          2 | title2 |       desc2 | MemberRejected |
    |          3 | title3 |       desc3 | MemberRejected |
    |          4 | title4 |       desc4 |   ActiveMember |
    |          5 | title5 |       desc5 |         (null) |
    |          6 | title6 |       desc6 |         (null) |
    
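    The difference between filtering in the join condition and filtering in the WHERE clause is easy to demonstrate with a small sqlite3 script (sample data invented): putting the member filter in WHERE silently turns the LEFT JOIN into an inner join.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Sections (section_id INTEGER, title TEXT);
CREATE TABLE SectionMembers (section_id INTEGER, memberid INTEGER, status TEXT);
INSERT INTO Sections VALUES (1, 'title1'), (2, 'title2');
INSERT INTO SectionMembers VALUES (1, 200, 'PendingMember'), (2, 999, 'ActiveMember');
""")

# Filter inside ON: unmatched sections survive with a NULL status.
on_filter = conn.execute("""
    SELECT s.section_id, sm.status
    FROM Sections s
    LEFT JOIN SectionMembers sm
      ON s.section_id = sm.section_id AND sm.memberid = 200
    ORDER BY s.section_id
""").fetchall()

# The same filter in WHERE: rows where sm.memberid is NULL fail the
# comparison, so the unmatched section disappears entirely.
where_filter = conn.execute("""
    SELECT s.section_id, sm.status
    FROM Sections s
    LEFT JOIN SectionMembers sm ON s.section_id = sm.section_id
    WHERE sm.memberid = 200
    ORDER BY s.section_id
""").fetchall()

print(on_filter)     # [(1, 'PendingMember'), (2, None)]
print(where_filter)  # [(1, 'PendingMember')]
```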
    qid & accept id: (14838374, 14838929) query: TSQL dynamic filters on one column soup:

    soup wrap:

    I'm making 4 assumptions here:

    1. You have SQL-Server 2008 or later (tag is only sql-server)
    2. Your criteria will always be in the format name = Y and value >=10 and value <= 25
    3. Your values column is actually an int column (based on your where clause)
    4. Your separate criteria should be separated by OR not and (since in your example you have WHERE (Name = 'x' ..) AND (Name = 'y'...) which will never evaluate to true)

    Assuming the above is true then you can use table valued parameters. The first step would be to create your parameter:

    CREATE TYPE dbo.TableFilter AS TABLE 
    (   Name        VARCHAR(50), 
        LowerValue  INT, 
        UpperValue  INT
    );
    

    Then you can create a procedure to get your filtered results:

    CREATE PROCEDURE dbo.CustomTableFilter @Filter dbo.TableFilter READONLY
    AS
        SELECT  T.*
        FROM    T
        WHERE   EXISTS
                (   SELECT  1
                    FROM    @Filter f
                    WHERE   T.Name = f.Name
                    AND     T.Value >= f.LowerValue 
                    AND     T.Value <= f.UpperValue
                )
    

    Then you can call your procedure using something like:

    DECLARE @Filter dbo.TableFilter;
    INSERT @Filter VALUES ('X', 1, 5), ('Y', 10, 25);
    
    EXECUTE dbo.CustomTableFilter @Filter;
    

    Example on SQL Fiddle
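    SQLite has no table-valued parameters, but the same EXISTS-against-a-criteria-table shape can be sketched with a temp table standing in for the TVP (sample data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T (Name TEXT, Value INTEGER)")
conn.executemany("INSERT INTO T VALUES (?, ?)",
                 [("X", 3), ("X", 9), ("Y", 15), ("Y", 40)])

# A temp table of criteria plays the role of the dbo.TableFilter parameter.
conn.execute("""CREATE TEMP TABLE Filter
               (Name TEXT, LowerValue INTEGER, UpperValue INTEGER)""")
conn.executemany("INSERT INTO Filter VALUES (?, ?, ?)",
                 [("X", 1, 5), ("Y", 10, 25)])

# Each criteria row is an OR-branch: a T row survives if ANY filter row
# matches it, which is exactly what the EXISTS probe expresses.
rows = conn.execute("""
    SELECT Name, Value FROM T
    WHERE EXISTS (SELECT 1 FROM Filter f
                  WHERE T.Name = f.Name
                  AND   T.Value BETWEEN f.LowerValue AND f.UpperValue)
    ORDER BY Name, Value
""").fetchall()
print(rows)   # [('X', 3), ('Y', 15)]
```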

    qid & accept id: (14841239, 14842425) query: sql normalize a table soup:

    soup wrap:

    If I understand you correctly, you're working with columns that contain multiple, delimited values (like the PICK database):

    `For multiple parts, this character | is added and the structure is repeated.`
    

    Typically, in a normalized database, one would have:

    UNIT  (something that might need service or repair)
    UnitId  PK
    UnitDescription
    
    PARTS  (repair / replacement parts)
    PartId PK
    PartDescription
    
    UNIT_SERVICES  (instances of repair visits/ service)
    ServiceID   int primary key
    UnitId      foreign key references UNIT
    ServiceDate
    TechnicianID
    etc
    
    
    SERVICE_PART   (part used in the service)
    ID          primary key
    ServiceID   foreign key references SERVICE
    PartID      foreign key references PART
    Quantity
    

    There could be zero, one, or multiple UNIT_SERVICES associated with a UNIT. There could be zero, one, or multiple SERVICE_PARTS associated with a SERVICE.

    In a normalized database, each part used in the servicing of a unit would exist in its own row in the SERVICE_PART table. We would not find two or more parts in the same SERVICE_PART tuple, separated by some delimiter, as was commonly done in so-called multivalued databases, which were precursors to the modern OODBMS.
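    The normalisation step itself — exploding a '|'-delimited column into one row per part — is mechanical. A minimal Python sketch, with invented legacy rows:

```python
# Hypothetical legacy rows: the parts for one service packed into a
# single '|'-delimited column, as described above.
legacy = [
    (1, "gasket|bolt|seal"),
    (2, "filter"),
]

# Normalisation: one (service_id, part) row per delimited value, ready
# to load into a SERVICE_PART-style table.
service_part = [
    (service_id, part)
    for service_id, packed in legacy
    for part in packed.split("|")
]
print(service_part)
# [(1, 'gasket'), (1, 'bolt'), (1, 'seal'), (2, 'filter')]
```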

    qid & accept id: (14849316, 14849699) query: How to fetch consecutive pairs of records in Oracle soup:

    soup wrap:

    Like other commenters I'm not entirely sure I follow, but if you only want to look at IDs 4 and 5 and want to match them up in date order, you can do something like this:

    with t as (
        select id, dt, row_number() over (partition by id order by dt) as rn
        from t42
        where id in (4, 5)
    )
    select t4.id as id4, t4.dt as date4, t5.id as id5, t5.dt as date5,
        case t4.rn when 1 then 'First' when 2 then 'Second' when 3 then 'Third' end
            || ' set of 4 and 5' as "Comment"
    from t t4
    join t t5 on t5.rn = t4.rn
    where t4.id = 4
    and t5.id = 5
    order by t4.rn;
    
           ID4 DATE4            ID5 DATE5     Comment             
    ---------- --------- ---------- --------- ---------------------
             4 02-JAN-13          5 05-JAN-13 First set of 4 and 5  
             4 08-JAN-13          5 12-JAN-13 Second set of 4 and 5 
    

    I'm not sure now if you actually want the 'comment' to be returned/displayed... probably not, which would simplify it slightly.


    For modified requirements:

    with t as (
        select id, dt, lead(dt) over (partition by id order by dt) as next_dt
        from t42
        where id in (4, 5)
    )
    select t4.id as id4, t4.dt as date4, t5.id as id5, min(t5.dt) as date5
    from t t4
    join t t5 on t5.dt > t4.dt and (t4.next_dt is null or t5.dt <= t4.next_dt)
    where t4.id = 4
    and t5.id = 5
    group by t4.id, t4.dt, t5.id
    order by t4.dt;
    
           ID4 DATE4                        ID5 DATE5               
    ---------- --------------------- ---------- ---------------------
             4 16.03.2012 17:49:28            5 10.05.2012 09:38:56   
             4 12.06.2012 08:47:52            5 02.08.2012 11:27:43   
             4 03.08.2012 13:24:54            5 03.08.2012 14:14:07   
    

    The CTE uses LEAD to peek at the next date for each ID, which is only really relevant for when ID is 4; and that can be null if there isn't an extra ID 4 without matches at the end. The join then only looks for ID 5 records that fall between two ID 4 dates (or after the last ID 4 date). If you want the alternate (later) ID 5 date in the first result just use MAX instead of MIN. (I'm not 100% about the > and <= matching; I've tried to interpret what you said, but you might need to tweak that if it isn't quite right).
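    The interval-matching shape of that query is not Oracle-specific; here is a sketch of the same LEAD-plus-join idea in SQLite (3.25+ for window functions), with invented sample dates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t42 (id INTEGER, dt TEXT)")
conn.executemany("INSERT INTO t42 VALUES (?, ?)", [
    (4, "2012-03-16"), (5, "2012-05-10"),
    (4, "2012-06-12"), (5, "2012-08-02"), (5, "2012-08-03"),
])

# Same shape as the Oracle query: LEAD marks the end of each ID-4
# interval, the join keeps only ID-5 rows inside it, MIN picks the
# earliest match per interval.
rows = conn.execute("""
    WITH t AS (
        SELECT id, dt, LEAD(dt) OVER (PARTITION BY id ORDER BY dt) AS next_dt
        FROM t42 WHERE id IN (4, 5)
    )
    SELECT t4.dt, MIN(t5.dt)
    FROM t t4
    JOIN t t5 ON t5.dt > t4.dt AND (t4.next_dt IS NULL OR t5.dt <= t4.next_dt)
    WHERE t4.id = 4 AND t5.id = 5
    GROUP BY t4.dt ORDER BY t4.dt
""").fetchall()
print(rows)   # [('2012-03-16', '2012-05-10'), ('2012-06-12', '2012-08-02')]
```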


    To work around what appears to be a 9i bug (probably fixed in 9.2.0.3 or 9.2.0.6 according to MOS, but it depends exactly which bug you're hitting):

    select t4.id as id4, t4.dt as date4, t5.id as id5, min(t5.dt) as date5
    from (
        select id, dt, lead(dt) over (partition by id order by dt) as next_dt
        from t42
        where id = 4
    ) t4
    join (select id, dt
        from t42
        where id = 5
    ) t5 on t5.dt > t4.dt and (t4.next_dt is null or t5.dt <= t4.next_dt)
    group by t4.id, t4.dt, t5.id
    order by t4.dt;
    

    I don't have an old enough version to test this against unfortunately. You don't have to use the t5 subselect, you could just join your main table straight to t4, but I think this is a little clearer.

    qid & accept id: (14856663, 14857244) query: Datagrid textbox search C# soup:

    soup wrap:

    This will give you the gridview row index for the value:

    String searchValue = "somestring";
    int rowIndex = -1;
    foreach(DataGridViewRow row in DataGridView1.Rows)
    {
        if(row.Cells[1].Value.ToString().Equals(searchValue))
        {
            rowIndex = row.Index;
            break;
        }
    }
    

    Or a LINQ query

        int rowIndex = -1;
    
        DataGridViewRow row = dgv.Rows
            .Cast<DataGridViewRow>()
            .Where(r => r.Cells["SystemId"].Value.ToString().Equals(searchValue))
            .First();
    
        rowIndex = row.Index;
    

    then you can do:

     dataGridView1.Rows[rowIndex].Selected = true;
    
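    Both versions boil down to "scan rows, keep the index of the first matching cell, fall back to -1". A language-neutral sketch of that logic in Python, with invented row data:

```python
# Hypothetical rows standing in for the DataGridView's row collection.
rows = [
    {"SystemId": "A1", "Name": "north"},
    {"SystemId": "B2", "Name": "south"},
]
search_value = "B2"

# next() with a default mirrors the loop-with-sentinel: first matching
# index, or -1 when nothing matches.
row_index = next(
    (i for i, row in enumerate(rows) if row["SystemId"] == search_value),
    -1,
)
print(row_index)   # 1
```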
    qid & accept id: (14860852, 14860906) query: Set column to automatically pull data from referenced table soup:

    soup wrap:

    You could create a view. A view is basically a SQL statement that is stored on the MySQL server and acts like a table:

    CREATE VIEW ViewName AS
    SELECT tbl1.data, tbl2.speeding
    FROM tbl1
    INNER JOIN tbl2 ON tbl2.`key` = tbl1.`key`;
    

    http://dev.mysql.com/doc/refman/5.0/en/create-view.html

    You then use the view as you would use any table

    SELECT data, speeding
    FROM ViewName
    
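    The same pattern works in any SQL engine with views; here is a runnable sqlite3 sketch (the join column is named k here because KEY is a reserved word, and the sample data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl1 (k INTEGER, data TEXT);
CREATE TABLE tbl2 (k INTEGER, speeding INTEGER);
INSERT INTO tbl1 VALUES (1, 'a'), (2, 'b');
INSERT INTO tbl2 VALUES (1, 0), (2, 1);

-- The stored join; after this, ViewName can be queried like a table.
CREATE VIEW ViewName AS
SELECT tbl1.data, tbl2.speeding
FROM tbl1
INNER JOIN tbl2 ON tbl2.k = tbl1.k;
""")

rows = conn.execute("SELECT data, speeding FROM ViewName ORDER BY data").fetchall()
print(rows)   # [('a', 0), ('b', 1)]
```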
    qid & accept id: (14866797, 14866820) query: cartesian product - SUM two columns in the same table soup:

    soup wrap:

    You can use CASE for this:

    SELECT  SUM(arc_baseEventCount) 'total event count', 
            SUM(CASE WHEN arc_name = 'Connector Raw Event Statistics' THEN arc_baseEventCount ELSE NULL END) 'Connector Raw Event Statistics'
    FROM    Events
    

    UPDATE 1

    SELECT  SUM(arc_baseEventCount) 'total event count', 
            SUM(CASE WHEN arc_name = 'Connector Raw Event Statistics' THEN arc_baseEventCount ELSE NULL END) 'total_1',
            SUM(CASE WHEN arc_name = 'Connector Raw Event Statistics' THEN arc_deviceCustomNumber3 ELSE NULL END) 'total_2'
    FROM    Events
    
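    The trick here is conditional aggregation: the CASE makes the second SUM see NULL (which SUM ignores) for non-matching rows, so both totals come out of a single scan. A sqlite3 sketch with invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Events (arc_name TEXT, arc_baseEventCount INTEGER)")
conn.executemany("INSERT INTO Events VALUES (?, ?)", [
    ("Connector Raw Event Statistics", 10),
    ("Connector Raw Event Statistics", 5),
    ("Other", 7),
])

# One pass over the table: the plain SUM counts everything, while the
# CASE inside the second SUM contributes NULL (ignored) for other rows.
total, raw_only = conn.execute("""
    SELECT SUM(arc_baseEventCount),
           SUM(CASE WHEN arc_name = 'Connector Raw Event Statistics'
                    THEN arc_baseEventCount END)
    FROM Events
""").fetchone()
print(total, raw_only)   # 22 15
```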
    qid & accept id: (14903899, 14904003) query: sub query with comma delimited output in one column soup:

    soup wrap:

    You can use the following:

    select t1.col1,
      t1.col2, 
      t1.col3,
      left(t2.col4, len(t2.col4)-1) col4
    from table1 t1
    cross apply
    (
      select cast(t2.Col4 as varchar(10)) + ', '
      from Table2 t2
      where t1.col1 = t2.col1
      FOR XML PATH('')
    ) t2 (col4)
    

    See SQL Fiddle with Demo.

    Or you can use:

    select t1.col1,
      t1.col2, 
      t1.col3,
      STUFF(
             (SELECT ', ' + cast(t2.Col4 as varchar(10))
              FROM Table2 t2
              where t1.col1 = t2.col1
              FOR XML PATH (''))
              , 1, 1, '')  AS col4
    from table1 t1
    

    See SQL Fiddle with Demo
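    FOR XML PATH('') is SQL Server's idiom for string aggregation; in engines with a native aggregate the same result is one function call. A sqlite3 sketch using group_concat (sample data invented; note that group_concat does not guarantee element order without extra work):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (col1 INTEGER, col2 TEXT);
CREATE TABLE table2 (col1 INTEGER, col4 INTEGER);
INSERT INTO table1 VALUES (1, 'a'), (2, 'b');
INSERT INTO table2 VALUES (1, 10), (1, 20), (2, 30);
""")

# group_concat plays the role of the FOR XML PATH('') trick: collapse the
# matching table2 rows into one comma-delimited column per table1 row.
rows = conn.execute("""
    SELECT t1.col1, t1.col2,
           (SELECT group_concat(t2.col4, ', ')
            FROM table2 t2 WHERE t2.col1 = t1.col1) AS col4
    FROM table1 t1 ORDER BY t1.col1
""").fetchall()
print(rows)   # e.g. [(1, 'a', '10, 20'), (2, 'b', '30')]
```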

    qid & accept id: (14930630, 14930654) query: How to select attributes in a relational database where I have to check multiple attributes? soup:

    soup wrap:

    You are missing the FROM clause, and string literals must be enclosed in single quotes ('') instead of double quotes. If age is a numeric column, remove the quotes around its value; if not, use single quotes. Something like:

    Select person1.*
    FROM person1
    where person1.age    = 42 
      and person1.job    = 'bng' 
      and person1.gender = 'f';
    

    SQL Fiddle Demo.

    This should give you the row:

    | PERSON1 | AGE | JOB | GENDER |
    --------------------------------
    |      p2 |  42 | bng |      f |
    
    qid & accept id: (14952911, 14953106) query: Postgres: How to create reference cell? soup:

    soup wrap:

    An RDBMS uses a different approach: there are queries and there is data. When you query something, it is natural to perform extra calculations on the data. In your case this is a simple arithmetic function.

    Say, you have a table:

    CREATE TABLE tab (
      id  integer PRIMARY KEY,
      a1  integer
    );
    

    Now, to achieve your case you can do the following:

    SELECT id,
           a1,
           a1+1 AS a2
      FROM tab;
    

    As you can see, I'm using existing columns in the formula and assigning the result a new alias, a2.
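    A runnable sketch of the same idea, using SQLite from Python with the tab table above (the inserted rows are made up):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE tab (id INTEGER PRIMARY KEY, a1 INTEGER)")
    conn.executemany("INSERT INTO tab VALUES (?, ?)", [(1, 10), (2, 20)])

    # a2 is computed per row at query time; nothing extra is stored.
    rows = conn.execute("SELECT id, a1, a1 + 1 AS a2 FROM tab ORDER BY id").fetchall()
    print(rows)  # -> [(1, 10, 11), (2, 20, 21)]
    ```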

    I really recommend you read the Tutorial and SQL Basics sections of the official PostgreSQL documentation, along with an introductory SQL book.

    qid & accept id: (14961787, 14962034) query: SQL Server 2005: Insert one to many (1 Order-Many Charges) results into @table soup:


    This should work:

    SELECT O.OrderId, C.ChargeId
    FROM Orders O
      JOIN Charges C ON O.CustomerId = C.CustomerId AND
        (C.ProductId = O.ProductId OR C.ProductId = 0)
    ORDER BY O.OrderId, C.ChargeId
    

    Here is the sample Fiddle.

    And it produces these results:

    ORDERID   CHARGEID
    1         1
    1         2
    2         3
    2         4
    3         5
    4         1
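    The join condition (exact product match, or the ProductId = 0 wildcard) can be sketched in SQLite from Python; the sample rows here are invented, so the output differs from the Fiddle above:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE Orders  (OrderId INTEGER, CustomerId INTEGER, ProductId INTEGER);
    CREATE TABLE Charges (ChargeId INTEGER, CustomerId INTEGER, ProductId INTEGER);
    INSERT INTO Orders  VALUES (1, 100, 7), (2, 100, 8);
    INSERT INTO Charges VALUES (1, 100, 7),  -- product-specific charge
                               (2, 100, 0);  -- ProductId = 0: applies to any product
    """)
    rows = conn.execute("""
        SELECT O.OrderId, C.ChargeId
        FROM Orders O
          JOIN Charges C ON O.CustomerId = C.CustomerId AND
            (C.ProductId = O.ProductId OR C.ProductId = 0)
        ORDER BY O.OrderId, C.ChargeId
    """).fetchall()
    print(rows)  # order 1 matches both charges; order 2 only the generic one
    ```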
    
    qid & accept id: (14964462, 14966382) query: JPA Query for toggling a boolean in a UPDATE soup:


    That can be done with the case expression:

    UPDATE FOO a 
    SET a.bar = 
      CASE a.bar 
        WHEN TRUE THEN FALSE
        ELSE TRUE END
    WHERE a.id in :ids
    

    For a nullable Boolean, a bit more is needed:

    UPDATE FOO a 
    SET a.bar = 
      CASE a.bar 
        WHEN TRUE THEN FALSE
        WHEN FALSE THEN TRUE
        ELSE a.bar END
    WHERE a.id in :ids
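    The JPQL above compiles down to a plain SQL CASE. Here is a sketch of the null-safe variant against SQLite from Python, where booleans are stored as 0/1 (table and data are made up):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY, bar INTEGER)")
    conn.executemany("INSERT INTO foo VALUES (?, ?)", [(1, 1), (2, 0), (3, None)])

    # Null-safe toggle: 1 <-> 0 flip, NULL stays NULL (the ELSE branch).
    conn.execute("""
        UPDATE foo SET bar =
          CASE bar WHEN 1 THEN 0
                   WHEN 0 THEN 1
                   ELSE bar END
        WHERE id IN (1, 2, 3)
    """)
    rows = conn.execute("SELECT id, bar FROM foo ORDER BY id").fetchall()
    print(rows)  # -> [(1, 0), (2, 1), (3, None)]
    ```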
    
    qid & accept id: (14965566, 14966674) query: How to copy in field if query returns it blank? soup:


    Making some assumptions about your data as in comments, particularly about how to match and pick a substitute name value; and with some dummy data that I think matches yours:

    create table tablea(out_num number,
        equip_name varchar2(5),
        event_type varchar2(10),
        comments varchar2(10),
        timestamp date, feed_id number);
    
    create table tableb(id number, name varchar2(10));
    
    alter session set nls_date_format = 'MM/DD/YYYY HH24:MI';
    
    insert into tablea values (12345, null, 'abcd', null, to_date('02/11/2013 11:12'), 1);
    insert into tablea values (12345, null, 'abcd', null, to_date('02/11/2013 11:11'), 1);
    insert into tablea values (12345, null, 'abcd', null, to_date('02/11/2013 11:06'), 1);
    insert into tablea values (12345, null, 'abcd', null, to_date('02/11/2013 11:06'), 1);
    insert into tablea values (12345, null, 'SUB', null, to_date('02/11/2013 11:11'), 2);
    insert into tablea values (12345, null, 'SUB', null, to_date('02/11/2013 11:12'), 2);
    insert into tablea values (12345, null, 'XYZ', null, to_date('02/11/2013 11:13'), 3);
    insert into tablea values (12345, null, 'XYZ', null, to_date('02/11/2013 11:13'), 3);
    insert into tablea values (12345, null, 'XYZ', null, to_date('02/11/2013 11:13'), 3);
    insert into tablea values (12345, null, 'XYZ', null, to_date('02/11/2013 11:13'), 3);
    insert into tablea values (12345, null, 'XYZ', null, to_date('02/11/2013 11:13'), 3);
    insert into tablea values (12345, null, 'XYZ', null, to_date('02/11/2013 11:03'), 3);
    insert into tablea values (12345, null, 'CAUSE', 'APPLE', to_date('02/11/2013 11:13'), 4);
    insert into tablea values (12345, null, 'CAUSE', 'APPLE', to_date('02/11/2013 11:13'), 4);
    insert into tablea values (12345, null, 'CAUSE', 'APPLE', to_date('02/11/2013 11:13'), 4);
    insert into tablea values (12345, null, 'STATUS', 'BOOKS', to_date('02/11/2013 11:13'), 5);
    insert into tablea values (12345, null, 'STATUS', 'BOOKS', to_date('02/11/2013 11:13'), 5);
    insert into tablea values (12345, null, 'STATUS', 'BOOKS', to_date('02/11/2013 11:03'), 5);
    
    insert into tableb values(3, 'LION');
    

    This gets your result:

    select * from (
        select a.out_num,
            a.timestamp,
            a.equip_name,
            a.event_type,
            a.comments,
            coalesce(b.name,
                first_value(b.name)
                    over (partition by a.out_num
                        order by b.name nulls last)) as name
        from tablea a
        left outer join tableb b on a.feed_id = b.id
        where a.out_num = '12345'
        and a.event_type in ('CAUSE', 'STATUS', 'XYZ')
    )
    where event_type in ('CAUSE', 'STATUS');
    
       OUT_NUM TIMESTAMP          EQUIP_NAME EVENT_TYPE COMMENTS   NAME     
    ---------- ------------------ ---------- ---------- ---------- ----------
         12345 02/11/2013 11:03              STATUS     BOOKS      LION       
         12345 02/11/2013 11:13              STATUS     BOOKS      LION       
         12345 02/11/2013 11:13              STATUS     BOOKS      LION       
         12345 02/11/2013 11:13              CAUSE      APPLE      LION       
         12345 02/11/2013 11:13              CAUSE      APPLE      LION       
         12345 02/11/2013 11:13              CAUSE      APPLE      LION       
    

    The inner query includes XYZ and uses the analytic first_value() function to pick a name if the directly matched value is null - the coalesce may not be necessary if there really will never be a direct match. (You might also need to adjust the partition by or order by clauses if the assumptions are wrong). The outer query just strips out the XYZ records since you don't want those.
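    The fill-in pattern itself ports to any database with window functions. Here is a trimmed-down sketch in SQLite from Python (window functions need SQLite 3.25+); NULLS LAST is spelled ORDER BY b.name IS NULL, b.name because older SQLite lacks that syntax, and the tables keep only the columns that matter:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE tablea (out_num INTEGER, event_type TEXT, feed_id INTEGER);
    CREATE TABLE tableb (id INTEGER, name TEXT);
    INSERT INTO tablea VALUES (12345, 'XYZ', 3), (12345, 'CAUSE', 4), (12345, 'STATUS', 5);
    INSERT INTO tableb VALUES (3, 'LION');
    """)
    # Only XYZ joins directly to LION; CAUSE and STATUS borrow it via first_value().
    rows = conn.execute("""
        SELECT * FROM (
            SELECT a.out_num, a.event_type,
                   COALESCE(b.name,
                       FIRST_VALUE(b.name) OVER (PARTITION BY a.out_num
                                                 ORDER BY b.name IS NULL, b.name)) AS name
            FROM tablea a
            LEFT JOIN tableb b ON a.feed_id = b.id
        )
        WHERE event_type IN ('CAUSE', 'STATUS')
        ORDER BY event_type
    """).fetchall()
    print(rows)  # -> [(12345, 'CAUSE', 'LION'), (12345, 'STATUS', 'LION')]
    ```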


    If you want to get a name value from any matching record then just remove the filter in the inner query.

    But now you're perhaps more likely to have more than one non-null record; this will give you one that matches a.feed_id if it exists, or the 'first' one (alphabetically, ish) for that out_num if it doesn't. You could order by b.id instead, or any other column in tableb; ordering by anything in tablea would need a different solution. If you'll only have one possible match anyway then it doesn't really matter and you can leave out the order by, though it's better to have it anyway.

    If I add some more data for a different out_num:

    insert into tablea values (12346, null, 'abcd', null, to_date('02/11/2013 11:11'), 1);
    insert into tablea values (12346, null, 'SUB', null, to_date('02/11/2013 11:12'), 2);
    insert into tablea values (12346, null, 'XYZ', null, to_date('02/11/2013 11:13'), 6);
    insert into tablea values (12346, null, 'CAUSE', 'APPLE', to_date('02/11/2013 11:14'), 4);
    insert into tablea values (12346, null, 'STATUS', 'BOOKS', to_date('02/11/2013 11:15'), 5);
    
    insert into tableb values(1, 'TIGER');
    

    ...then this - which just has the filter dropped, and I've left out the coalesce this time - gives the same answer for 12345, and this for 12346:

    select * from (
        select a.out_num,
            a.timestamp,
            a.equip_name,
            a.event_type,
            a.comments,
            first_value(b.name)
                over (partition by a.out_num
                    order by b.name nulls last) as name
        from tablea a
        left outer join tableb b on a.feed_id = b.id
    )
    where out_num = '12346'
    and event_type in ('CAUSE', 'STATUS');
    
       OUT_NUM TIMESTAMP          EQUIP_NAME EVENT_TYPE COMMENTS   NAME     
    ---------- ------------------ ---------- ---------- ---------- ----------
         12346 02/11/2013 11:14              CAUSE      APPLE      TIGER      
         12346 02/11/2013 11:15              STATUS     BOOKS      TIGER      
    

    ... where TIGER is linked to abcd, not XYZ.

    qid & accept id: (15002034, 15002117) query: SQL to group more than one records of a joined table? soup:


    In MySQL you will want to use the GROUP_CONCAT() function which will concatenate the multiple rows into a single row. Since this is an aggregate function, you will also use a GROUP BY clause on the query:

    select p.id,
      p.name,
      group_concat(c.id order by c.id) ChildrenIds,
      group_concat(c.name order by c.id) ChildrenNames
    from parent p
    left join children c
      on p.id = c.parent_id
    group by p.id, p.name
    

    See SQL Fiddle with Demo.

    The result is:

    | ID |     NAME | CHILDRENIDS |                    CHILDRENNAMES |
    ------------------------------------------------------------------
    |  1 | Parent 1 |         1,2 |            Child P1 1,Child P1 2 |
    |  2 | Parent 2 |       3,4,5 | Child P2 1,Child P2 2,Child P2 3 |
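    The same shape can be sketched in SQLite from Python, whose group_concat() behaves like MySQL's GROUP_CONCAT() except that it only accepts an ORDER BY inside the call from SQLite 3.44 on, so that clause is omitted here:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE parent   (id INTEGER, name TEXT);
    CREATE TABLE children (id INTEGER, parent_id INTEGER, name TEXT);
    INSERT INTO parent VALUES (1, 'Parent 1'), (2, 'Parent 2');
    INSERT INTO children VALUES (1, 1, 'Child P1 1'), (2, 1, 'Child P1 2'),
                                (3, 2, 'Child P2 1'), (4, 2, 'Child P2 2'), (5, 2, 'Child P2 3');
    """)
    # One row per parent; child ids/names collapsed into comma-separated strings.
    rows = conn.execute("""
        SELECT p.id, p.name,
               group_concat(c.id)   AS ChildrenIds,
               group_concat(c.name) AS ChildrenNames
        FROM parent p
        LEFT JOIN children c ON p.id = c.parent_id
        GROUP BY p.id, p.name
        ORDER BY p.id
    """).fetchall()
    print(rows)
    ```

    Note that without the ORDER BY inside the aggregate, SQLite makes no guarantee about the order of the concatenated pieces.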
    
    qid & accept id: (15034144, 15034170) query: Is it possible to join two tables of multiple rows by only the first ID in each table? soup:


    You can do this by using row_number() to create a fake join column:

    select coalesce(a.id, b.id) as id, a.colors, b.states
    from (select a.*, row_number() over (order by id) as seqnum
          from a
         ) a full outer join
         (select b.*, row_number() over (order by id) as seqnum
          from b
         ) b
         on b.seqnum = a.seqnum
    

    Actually, in Oracle, you can also just use rownum:

    select coalesce(a.id, b.id) as id, a.colors, b.states
    from (select a.*, rownum as seqnum
          from a
         ) a full outer join
         (select b.*, rownum as seqnum
          from b
         ) b
         on b.seqnum = a.seqnum
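    A sketch of the row_number() pairing in SQLite from Python; FULL OUTER JOIN is swapped for a LEFT JOIN under the assumption that table a has at least as many rows as b, and the data is invented:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE a (id INTEGER, colors TEXT);
    CREATE TABLE b (id INTEGER, states TEXT);
    INSERT INTO a VALUES (1, 'red'), (2, 'blue'), (3, 'green');
    INSERT INTO b VALUES (10, 'OH'), (20, 'TX');
    """)
    # Pair the nth row of a with the nth row of b via a synthetic seqnum column.
    rows = conn.execute("""
        SELECT COALESCE(a.id, b.id) AS id, a.colors, b.states
        FROM (SELECT a.*, ROW_NUMBER() OVER (ORDER BY id) AS seqnum FROM a) a
        LEFT JOIN (SELECT b.*, ROW_NUMBER() OVER (ORDER BY id) AS seqnum FROM b) b
          ON b.seqnum = a.seqnum
        ORDER BY a.seqnum
    """).fetchall()
    print(rows)  # -> [(1, 'red', 'OH'), (2, 'blue', 'TX'), (3, 'green', None)]
    ```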
    
    qid & accept id: (15066914, 15129850) query: SQL return Information by not existing rows soup:


    Hope to explain it now more clearly:

    The original code what I have now is:

    select distinct username, name, surname
    from users u, accounts a
    where u.user_nr = a.user_nr
    and username in (
    'existing_user',
    'not_existing_user'
    ) order by username;
    

    and it gives me:

    USERNAME                  NAME            SURNAME  
    ------------------------- --------------- ---------------
    existing_user              Hello           All
    
    1 row selected.
    

    and I need:

    USERNAME                  NAME            SURNAME  
    ------------------------- --------------- ---------------
    existing_user             Hello           All
    not_existing_user     Not Exists      Not Exists
    
    2 row selected.
    

    The problem: the user not_existing_user does not exist in the database, but the query has to show it anyway, with the info that the user is not in the DB. With 500 users I can't check each one separately :/
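    One standard way to get that output (a sketch only, with guessed table shapes): drive the query from a literal list of the usernames and LEFT JOIN the real tables, substituting 'Not Exists' for the missing columns. In SQLite from Python:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE users    (user_nr INTEGER, name TEXT, surname TEXT);
    CREATE TABLE accounts (user_nr INTEGER, username TEXT);
    INSERT INTO users VALUES (1, 'Hello', 'All');
    INSERT INTO accounts VALUES (1, 'existing_user');
    """)
    # The derived table w carries every wanted username, matched or not.
    rows = conn.execute("""
        SELECT w.username,
               COALESCE(u.name,    'Not Exists') AS name,
               COALESCE(u.surname, 'Not Exists') AS surname
        FROM (SELECT 'existing_user' AS username
              UNION ALL SELECT 'not_existing_user') w
        LEFT JOIN accounts a ON a.username = w.username
        LEFT JOIN users u    ON u.user_nr  = a.user_nr
        ORDER BY w.username
    """).fetchall()
    print(rows)
    ```

    For 500 users the derived table would be generated rather than typed out by hand.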

    qid & accept id: (15100101, 15100456) query: UNPIVOT on an indeterminate number of columns soup:


    It sounds like you want to unpivot the table (pivoting would involve going from many rows and 2 columns to 1 row with many columns). You would most likely need to use dynamic SQL to generate the query and then use the DBMS_SQL package (or potentially EXECUTE IMMEDIATE) to execute it. You should also be able to construct a pipelined table function that did the unpivoting. You'd need to use dynamic SQL within the pipelined table function as well but it would potentially be less code. I'd expect a pure dynamic SQL statement using UNPIVOT to be more efficient, though.

    An inefficient approach, but one that is relatively easy to follow, would be something like

    SQL> ed
    Wrote file afiedt.buf
    
      1  create or replace type emp_unpivot_type
      2  as object (
      3    empno number,
      4    col   varchar2(4000)
      5* );
    SQL> /
    
    Type created.
    
    SQL> create or replace type emp_unpivot_tbl
      2  as table of emp_unpivot_type;
      3  /
    
    Type created.
    
    SQL> ed
    Wrote file afiedt.buf
    
      1  create or replace function unpivot_emp
      2  ( p_empno in number )
      3    return emp_unpivot_tbl
      4    pipelined
      5  is
      6    l_val varchar2(4000);
      7  begin
      8    for cols in (select column_name from user_tab_columns where table_name = 'EMP')
      9    loop
     10      execute immediate 'select ' || cols.column_name || ' from emp where empno = :empno'
     11         into l_val
     12       using p_empno;
     13      pipe row( emp_unpivot_type( p_empno, l_val ));
     14    end loop;
     15    return;
     16* end;
    SQL> /
    
    Function created.
    

    You can then call that in a SQL statement (I would think that you'd want at least a third column with the column name)

    SQL> ed
    Wrote file afiedt.buf
    
      1  select *
      2*   from table( unpivot_emp( 7934 ))
    SQL> /
    
         EMPNO COL
    ---------- ----------------------------------------
          7934 7934
          7934 MILLER
          7934 CLERK
          7934 7782
          7934 23-JAN-82
          7934 1301
          7934
          7934 10
    
    8 rows selected.
    

    A more efficient approach would be to adapt Tom Kyte's show_table pipelined table function.
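    The dynamic-SQL loop translates naturally to any client language. A sketch in Python against SQLite, emitting the column name as the suggested third column (table and data are invented):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE emp (empno INTEGER, ename TEXT, job TEXT, sal INTEGER)")
    conn.execute("INSERT INTO emp VALUES (7934, 'MILLER', 'CLERK', 1300)")

    def unpivot_emp(conn, empno):
        # Discover the column list at run time, then fetch each value one by one,
        # mirroring the dynamic-SQL loop in the pipelined function above.
        cols = [r[1] for r in conn.execute("PRAGMA table_info(emp)")]
        out = []
        for col in cols:
            # Column names come from PRAGMA table_info, not user input, so
            # interpolating them is safe here; empno stays a bind variable.
            val = conn.execute(f"SELECT {col} FROM emp WHERE empno = ?", (empno,)).fetchone()[0]
            out.append((empno, col, val))
        return out

    rows = unpivot_emp(conn, 7934)
    print(rows)
    ```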

    qid & accept id: (15108987, 15115503) query: Mysql get a number of before and afer rows soup:


    This is really easy with union. Try this:

    (select t.* from t where t.col <= YOURNAME
     order by t.col desc
     limit 6
    )
    union all
    (select t.* from t where t.col > YOURNAME
     order by t.col
     limit 5
    )
    order by t.col
    

    The first part of the query returns the matching row plus the five before it (hence the limit of 6); the second returns the five after.

    By the way, if you have duplicates, you might want this instead:

    (select t.* from t where t.col = YOURNAME)
    union all
    (select t.* from t where t.col < YOURNAME
     order by t.col desc
     limit 5
    )
    union all
    (select t.* from t where t.col > YOURNAME
     order by t.col
     limit 5
    )
    order by t.col
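    A sketch of the first query in SQLite from Python; SQLite rejects ORDER BY/LIMIT inside parenthesized UNION branches, so each branch is wrapped in a subselect instead (MySQL accepts both forms). Letters stand in for names, with a window of 3 before and 2 after to keep it short:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (col TEXT)")
    conn.executemany("INSERT INTO t VALUES (?)", [(c,) for c in "abcdefghij"])

    # Branch 1: the row for 'e' plus up to 2 before it; branch 2: up to 2 after.
    rows = conn.execute("""
        SELECT col FROM (SELECT col FROM t WHERE col <= 'e' ORDER BY col DESC LIMIT 3)
        UNION ALL
        SELECT col FROM (SELECT col FROM t WHERE col >  'e' ORDER BY col LIMIT 2)
        ORDER BY col
    """).fetchall()
    print(rows)  # -> [('c',), ('d',), ('e',), ('f',), ('g',)]
    ```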
    
    qid & accept id: (15117826, 15118331) query: selecting multiple counts when tables not directly co-relate soup:


    You need to do your counts in subqueries, or count distinct values, because your multiple one-to-many relationships are causing a cross join. I don't know your data, but imagine this scenario:

    Users:

    User_ID |   Source_ID
    --------+--------------
      1     |      1  
    

    White_Rules

    Victim_ID | Rule_ID
    ----------+-------------
       1      |    1
       1      |    2
    

    Black_Rules

    Victim_ID | Rule_ID
    ----------+-------------
       1      |    3
       1      |    4
    

    If you run

    SELECT  Users.User_ID, 
            Users.Source_ID, 
            White_Rules.Rule_ID AS WhiteRuleID, 
            Black_Rules.Rule_ID AS BlackRuleID
    FROM    Users
            LEFT JOIN White_Rules
                ON White_Rules.Victim_ID = Users.User_ID
            LEFT JOIN Black_Rules
                ON Black_Rules.Victim_ID = Users.User_ID
    

    You will get all combinations of White_Rules.Rule_ID and Black_Rules.Rule_ID:

    User_ID | Source_ID | WhiteRuleID | BlackRuleID
    --------+-----------+-------------+-------------
      1     |    1      |      1      |      3
      1     |    1      |      2      |      4
      1     |    1      |      1      |      3
      1     |    1      |      2      |      4
    

    So counting the results will return 4 white rules and 4 black rules, even though there are only 2 of each.
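    The fan-out, and the subquery fix, can be reproduced end to end in SQLite from Python with exactly the toy rows above (General_Rules and the WHERE filter are dropped to keep the sketch short):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE Users       (User_ID INTEGER, Source_ID INTEGER);
    CREATE TABLE White_Rules (Victim_ID INTEGER, Rule_ID INTEGER);
    CREATE TABLE Black_Rules (Victim_ID INTEGER, Rule_ID INTEGER);
    INSERT INTO Users VALUES (1, 1);
    INSERT INTO White_Rules VALUES (1, 1), (1, 2);
    INSERT INTO Black_Rules VALUES (1, 3), (1, 4);
    """)
    # Naive join: 2 white rows x 2 black rows fan out to 4 combinations.
    naive = conn.execute("""
        SELECT COUNT(w.Rule_ID), COUNT(b.Rule_ID)
        FROM Users u
        LEFT JOIN White_Rules w ON w.Victim_ID = u.User_ID
        LEFT JOIN Black_Rules b ON b.Victim_ID = u.User_ID
    """).fetchone()
    # Pre-aggregated subqueries count each table independently.
    fixed = conn.execute("""
        SELECT SUM(COALESCE(w.TotalWhite, 0)), SUM(COALESCE(b.TotalBlack, 0))
        FROM Users u
        LEFT JOIN (SELECT Victim_ID, COUNT(*) AS TotalWhite
                   FROM White_Rules GROUP BY Victim_ID) w ON w.Victim_ID = u.User_ID
        LEFT JOIN (SELECT Victim_ID, COUNT(*) AS TotalBlack
                   FROM Black_Rules GROUP BY Victim_ID) b ON b.Victim_ID = u.User_ID
    """).fetchone()
    print(naive, fixed)  # -> (4, 4) (2, 2)
    ```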

    You should get the required results if you change your query to this:

    SELECT  Users.Source_ID,
            SUM(COALESCE(w.TotalWhite, 0)) AS TotalWhite,
            SUM(COALESCE(b.TotalBlack, 0)) AS TotalBlack,
            SUM(COALESCE(g.TotalGeneral, 0)) AS TotalGeneral
    FROM    Users
            LEFT JOIN
            (   SELECT  Victim_ID, COUNT(*) AS TotalWhite
                FROM    White_Rules
                GROUP BY Victim_ID
            ) w
                ON w.Victim_ID = Users.User_ID
            LEFT JOIN
            (   SELECT  Victim_ID, COUNT(*) AS TotalBlack
                FROM    Black_Rules
                GROUP BY Victim_ID
            ) b
                ON b.Victim_ID = Users.User_ID
            LEFT JOIN
            (   SELECT  Victim_ID, COUNT(*) AS TotalGeneral
                FROM    General_Rules
                GROUP BY Victim_ID
            ) g
                ON g.Victim_ID = Users.User_ID
    WHERE   Deleted = 'f'
    AND     Source IS NOT NULL
    GROUP BY Users.Source_ID
    

    Example on SQL Fiddle

    An alternative would be:

    SELECT  Users.Source_ID,
            COUNT(Rules.TotalWhite) AS TotalWhite,
            COUNT(Rules.TotalBlack) AS TotalBlack,
            COUNT(Rules.TotalGeneral) AS TotalGeneral
    FROM    Users
            LEFT JOIN
            (   SELECT  Victim_ID, 1 AS TotalWhite, NULL AS TotalBlack, NULL AS TotalGeneral
                FROM    White_Rules
                UNION ALL
                SELECT  Victim_ID, NULL AS TotalWhite, 1 AS TotalBlack, NULL AS TotalGeneral
                FROM    Black_Rules
                UNION ALL
                SELECT  Victim_ID, NULL AS TotalWhite, NULL AS TotalBlack, 1 AS TotalGeneral
                FROM    General_Rules
            ) Rules
                ON Rules.Victim_ID = Users.User_ID
    WHERE   Deleted = 'f'
    AND     Source IS NOT NULL
    GROUP BY Users.Source_ID
    

    Example on SQL Fiddle

    qid & accept id: (15122065, 15122150) query: SQL query for displaying specific data soup:

    For that, you need to look at all the numbers. The best way is using group by and having:

    \n
    select personid\nfrom person\ngroup by personid\nhaving sum(case when code not in ('1', '2', '3', '4', '5') then 1 else 0 end) = 0\n
    \n

    The having clause counts the number of records that are not those codes. If the count is 0, then the record is returned.

    \n

    If you want to be sure that all 5 codes are selected, then use this condition:

    \n
    having sum(case when code not in ('1', '2', '3', '4', '5') then 1 else 0 end) = 0 and\n       count(distinct code) = 5\n
    \n soup wrap:

    For that, you need to look at all the numbers. The best way is using group by and having:

    select personid
    from person
    group by personid
    having sum(case when code not in ('1', '2', '3', '4', '5') then 1 else 0 end) = 0
    

    The having clause counts the number of records that are not those codes. If the count is 0, then the record is returned.

    If you want to be sure that all 5 codes are selected, then use this condition:

    having sum(case when code not in ('1', '2', '3', '4', '5') then 1 else 0 end) = 0 and
           count(distinct code) = 5
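
    This "count the violations" pattern is easy to verify. A minimal sketch running SQLite through Python's sqlite3, with invented sample people:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (personid INTEGER, code TEXT);
-- person 1 has exactly codes 1-5; person 2 has a stray code '9';
-- person 3 has only codes 1-3 (all allowed, but not all five)
INSERT INTO person VALUES
 (1,'1'),(1,'2'),(1,'3'),(1,'4'),(1,'5'),
 (2,'1'),(2,'2'),(2,'9'),
 (3,'1'),(3,'2'),(3,'3');
""")

only_allowed = sorted(r[0] for r in conn.execute("""
    SELECT personid FROM person
    GROUP BY personid
    HAVING SUM(CASE WHEN code NOT IN ('1','2','3','4','5') THEN 1 ELSE 0 END) = 0
"""))

all_five = sorted(r[0] for r in conn.execute("""
    SELECT personid FROM person
    GROUP BY personid
    HAVING SUM(CASE WHEN code NOT IN ('1','2','3','4','5') THEN 1 ELSE 0 END) = 0
       AND COUNT(DISTINCT code) = 5
"""))
```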
    
    qid & accept id: (15150057, 15150095) query: Adding a Date column based on the next row date value soup:

    The easiest way to do this is with a correlated subquery:

    \n
    select t.*,\n       (select top 1 dateadd(day, -1, startDate )\n        from tbl_temp t2\n        where t2.aid = t.aid and\n              t2.uid = t.uid and\n              t2.startdate > t.startdate\n       ) as endDate\nfrom tbl_temp t\n
    \n

    To get the current date, use isnull():

    \n
    select t.*,\n       isnull((select top 1 dateadd(day, -1, startDate )\n               from tbl_temp t2\n               where t2.aid = t.aid and\n                     t2.uid = t.uid and\n                     t2.startdate > t.startdate\n               ), getdate()\n              ) as endDate\nfrom tbl_temp t\n
    \n

    Normally, I would recommend coalesce() over isnull(). However, there is a bug in some versions of SQL Server where it evaluates the first argument twice. Normally, this doesn't make a difference, but with a subquery it does.

    \n

    And finally, the use of sysdate makes me think of Oracle. The same approach will work there too.

    \n soup wrap:

    The easiest way to do this is with a correlated subquery:

    select t.*,
           (select top 1 dateadd(day, -1, startDate)
            from tbl_temp t2
            where t2.aid = t.aid and
                  t2.uid = t.uid and
                  t2.startdate > t.startdate
            order by t2.startdate
           ) as endDate
    from tbl_temp t
    

    To get the current date, use isnull():

    select t.*,
           isnull((select top 1 dateadd(day, -1, startDate)
                   from tbl_temp t2
                   where t2.aid = t.aid and
                         t2.uid = t.uid and
                         t2.startdate > t.startdate
                   order by t2.startdate
                   ), getdate()
                  ) as endDate
    from tbl_temp t
    

    Normally, I would recommend coalesce() over isnull(). However, there is a bug in some versions of SQL Server where it evaluates the first argument twice. Normally, this doesn't make a difference, but with a subquery it does.

    And finally, the use of sysdate makes me think of Oracle. The same approach will work there too.
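
    The same correlated-subquery idea can be sketched in SQLite (via Python's sqlite3), where MIN() of the later start dates stands in for TOP 1 ... ORDER BY, and date(..., '-1 day') stands in for dateadd(); the sample rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl_temp (aid INTEGER, uid INTEGER, startdate TEXT);
INSERT INTO tbl_temp VALUES
 (1, 1, '2013-01-01'),
 (1, 1, '2013-01-10'),
 (1, 1, '2013-01-20');
""")

rows = conn.execute("""
    SELECT t.startdate,
           -- MIN() of the later start dates plays the role of TOP 1 ... ORDER BY;
           -- the last row has no successor, so its endDate comes back NULL
           (SELECT date(MIN(t2.startdate), '-1 day')
            FROM tbl_temp t2
            WHERE t2.aid = t.aid AND t2.uid = t.uid
              AND t2.startdate > t.startdate) AS endDate
    FROM tbl_temp t
    ORDER BY t.startdate
""").fetchall()
```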

    qid & accept id: (15187839, 15187881) query: MYSQL How do I Select all emails from a table but limit number of emails with the same domain soup:
    SELECT\n  MIN(email) AS address1\n  IF(MAX(email)==MIN(email),NULL,MAX(email)) AS address2\nFROM emaillist\nGROUP BY substring_index(email, '@', -1);\n
    \n

    and if you want them in one column

    \n
    SELECT MIN(email) AS address1\nFROM emaillist\nGROUP BY substring_index(email, '@', -1)\nUNION\nSELECT MAX(email) AS address1\nFROM emaillist\nGROUP BY substring_index(email, '@', -1)\n
    \n soup wrap:
    SELECT
      MIN(email) AS address1,
      IF(MAX(email) = MIN(email), NULL, MAX(email)) AS address2
    FROM emaillist
    GROUP BY substring_index(email, '@', -1);
    

    and if you want them in one column

    SELECT MIN(email) AS address1
    FROM emaillist
    GROUP BY substring_index(email, '@', -1)
    UNION
    SELECT MAX(email) AS address1
    FROM emaillist
    GROUP BY substring_index(email, '@', -1)
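
    To see the per-domain grouping in action, here is a rough sketch running SQLite via Python's sqlite3. SQLite has no substring_index() or IF(), so substr()/instr() and CASE stand in; the addresses are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emaillist (email TEXT);
INSERT INTO emaillist VALUES
 ('alice@a.com'), ('bob@a.com'), ('carol@a.com'), ('dave@b.com');
""")

rows = conn.execute("""
    SELECT MIN(email),
           -- CASE stands in for MySQL's IF(MAX(email) = MIN(email), NULL, MAX(email))
           CASE WHEN MAX(email) = MIN(email) THEN NULL ELSE MAX(email) END
    FROM emaillist
    GROUP BY substr(email, instr(email, '@') + 1)   -- the domain part
    ORDER BY 1
""").fetchall()
```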
    
    qid & accept id: (15203058, 15203349) query: Group the rows that are having the same value in specific field in MySQL soup:

    I'm not particullarly proud of this solution because it is not very clear, but at least it's fast and simple. If all of the items have "done" = 1 then the sum will be equal to the count SUM = COUNT

    \n
    SELECT query_id, SUM(done) AS doneSum, COUNT(done) AS doneCnt \nFROM tbl \nGROUP BY query_id\n
    \n

    And if you add a having clause you get the items that are "done".

    \n
    HAVING doneSum = doneCnt\n
    \n

    I'll let you format the solution properly, you can do a DIFERENCE to get the "not done" items or doneSum <> doneCnt.

    \n

    Btw, SQL fiddle here.

    \n soup wrap:

    I'm not particularly proud of this solution because it is not very clear, but at least it's fast and simple. If all of the items have "done" = 1, then the sum will equal the count (SUM = COUNT).

    SELECT query_id, SUM(done) AS doneSum, COUNT(done) AS doneCnt 
    FROM tbl 
    GROUP BY query_id
    

    And if you add a having clause you get the items that are "done".

    HAVING doneSum = doneCnt
    

    I'll let you format the solution properly; you can take the difference to get the "not done" items, or use doneSum <> doneCnt.

    Btw, SQL fiddle here.
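
    A minimal check of the SUM = COUNT trick, running SQLite through Python's sqlite3 with invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl (query_id INTEGER, done INTEGER);
-- query 1: every row done (SUM = COUNT); query 2: one row still pending
INSERT INTO tbl VALUES (1,1),(1,1),(2,1),(2,0);
""")

done_ids = sorted(r[0] for r in conn.execute("""
    SELECT query_id FROM tbl
    GROUP BY query_id
    HAVING SUM(done) = COUNT(done)
"""))
```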

    qid & accept id: (15208232, 15208343) query: How to see if a field entry has a corresponding entry in another field? soup:

    Assuming there are no additional columns besides the 3 pairs listed, this can be done with a simple WHERE clause that tests for a non-NULL start date in each column along with a corresponding NULL end. If any of the three conditions is met, the Company will be returned.

    \n
    SELECT DISTINCT Company\nFROM Table1\nWHERE\n  (Start1 IS NOT NULL AND End1 IS NULL)\n  OR (Start2 IS NOT NULL AND End2 IS NULL)\n  OR (Start3 IS NOT NULL AND End3 IS NULL)\n
    \n

    If your empty fields are actually empty strings '' instead of NULL, substitute the empty string as in:

    \n
    (Start1 <> '' AND End1 = '')\n
    \n

    Note, the DISTINCT isn't needed if the Company column is a unique or primary key.

    \n soup wrap:

    Assuming there are no additional columns besides the 3 pairs listed, this can be done with a simple WHERE clause that tests for a non-NULL start date in each column along with a corresponding NULL end. If any of the three conditions is met, the Company will be returned.

    SELECT DISTINCT Company
    FROM Table1
    WHERE
      (Start1 IS NOT NULL AND End1 IS NULL)
      OR (Start2 IS NOT NULL AND End2 IS NULL)
      OR (Start3 IS NOT NULL AND End3 IS NULL)
    

    If your empty fields are actually empty strings '' instead of NULL, substitute the empty string as in:

    (Start1 <> '' AND End1 = '')
    

    Note, the DISTINCT isn't needed if the Company column is a unique or primary key.
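
    A quick sketch with SQLite via Python's sqlite3 and two invented companies, one of which has an open (NULL-ended) engagement:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (Company TEXT, Start1 TEXT, End1 TEXT,
                     Start2 TEXT, End2 TEXT, Start3 TEXT, End3 TEXT);
INSERT INTO Table1 VALUES
 ('Acme',  '2001', NULL,   NULL, NULL, NULL, NULL),  -- started, never ended
 ('Bravo', '2001', '2002', NULL, NULL, NULL, NULL);  -- properly closed
""")

open_companies = [r[0] for r in conn.execute("""
    SELECT DISTINCT Company FROM Table1
    WHERE (Start1 IS NOT NULL AND End1 IS NULL)
       OR (Start2 IS NOT NULL AND End2 IS NULL)
       OR (Start3 IS NOT NULL AND End3 IS NULL)
""")]
```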

    qid & accept id: (15237740, 15237755) query: select users have more than one distinct records in mysql soup:

    just add having clause

    \n
    SELECT userId, COUNT(DISTINCT webpageId) AS count \nFROM visits \nGROUP BY userId\nHAVING COUNT(DISTINCT webpageId) > 1\n
    \n

    but if you only what the ID

    \n
    SELECT userId\nFROM visits \nGROUP BY userId\nHAVING COUNT(DISTINCT webpageId) > 1\n
    \n\n

    the reason why you are filtering on HAVING clause and not on WHERE is because, WHERE clause cannot support columns that where aggregated.

    \n soup wrap:

    Just add a HAVING clause:

    SELECT userId, COUNT(DISTINCT webpageId) AS count 
    FROM visits 
    GROUP BY userId
    HAVING COUNT(DISTINCT webpageId) > 1
    

    But if you only want the ID:

    SELECT userId
    FROM visits 
    GROUP BY userId
    HAVING COUNT(DISTINCT webpageId) > 1
    

    The reason you filter in the HAVING clause rather than in WHERE is that the WHERE clause cannot reference aggregated columns.
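
    A minimal demonstration via Python's sqlite3, with invented visits:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE visits (userId INTEGER, webpageId INTEGER);
-- user 1 visits two distinct pages; user 2 hits the same page twice
INSERT INTO visits VALUES (1,10),(1,20),(2,10),(2,10);
""")

multi_page_users = sorted(r[0] for r in conn.execute("""
    SELECT userId FROM visits
    GROUP BY userId
    HAVING COUNT(DISTINCT webpageId) > 1
"""))
```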

    qid & accept id: (15243399, 15243631) query: Select all table names from Oracle DB soup:

    Try this

    \n
    SELECT 'Existing Tables: ' || wm_concat(table_name) tablenames \n  FROM user_tables;\n
    \n

    For the sample Oracle HR database it returns

    \n
    TABLENAMES\n------------------------------------------------------------------------------------\nExisting Tables: REGIONS,LOCATIONS,DEPARTMENTS,JOBS,EMPLOYEES,JOB_HISTORY,COUNTRIES\n
    \n

    UPDATE: Example with LISTAGG()

    \n
    SELECT 'Existing Tables: ' || LISTAGG(table_name, ',') \n        WITHIN GROUP (ORDER BY table_name) tablenames \n  FROM user_tables;\n
    \n soup wrap:

    Try this

    SELECT 'Existing Tables: ' || wm_concat(table_name) tablenames 
      FROM user_tables;
    

    For the sample Oracle HR database it returns

    TABLENAMES
    ------------------------------------------------------------------------------------
    Existing Tables: REGIONS,LOCATIONS,DEPARTMENTS,JOBS,EMPLOYEES,JOB_HISTORY,COUNTRIES
    

    UPDATE: Example with LISTAGG()

    SELECT 'Existing Tables: ' || LISTAGG(table_name, ',') 
            WITHIN GROUP (ORDER BY table_name) tablenames 
      FROM user_tables;
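
    Other engines spell this aggregate differently; SQLite's group_concat() is the rough analogue of wm_concat / LISTAGG, as this Python sqlite3 sketch (with an invented stand-in for user_tables) shows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user_tables (table_name TEXT);
INSERT INTO user_tables VALUES ('REGIONS'), ('JOBS'), ('COUNTRIES');
""")

# group_concat() is SQLite's analogue of Oracle's wm_concat / LISTAGG
row = conn.execute("""
    SELECT 'Existing Tables: ' || group_concat(table_name, ',')
    FROM (SELECT table_name FROM user_tables ORDER BY table_name)
""").fetchone()[0]
```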
    
    qid & accept id: (15264563, 15264928) query: Aggregate on Datetime Column for Pivot soup:

    You could get it like this:

    \n
    SELECT  l1.EmpID\n        , l1.LoginTime [SignIn]\n        , l2.LoginTime [SignOut]\nFROM    Login l1\nLEFT JOIN   \n        Login l2 ON \n        l2.EmpID = l1.EmpID\nAND     CAST(l2.LoginTime AS DATE) = CAST(l1.LoginTime AS DATE)\nAND     l2.status = 'SignOut'\nWHERE   l1.status = 'SignIn'\n
    \n

    Note that in case if you had more than one signin/signout per day for an employee and you wanted to get his first SignIn and last SignOut for a day, you would have to change the query:

    \n
    SELECT  l1.EmpID\n        , MIN(l1.LoginTime) [SignIn]\n        , MAX(l2.LoginTime) [SignOut]\nFROM    Login l1\nLEFT JOIN   \n        Login l2 ON \n        l2.EmpID = l1.EmpID\nAND     CAST(l2.LoginTime AS DATE) = CAST(l1.LoginTime AS DATE)\nAND     l2.status = 'SignOut'\nWHERE   l1.status = 'SignIn'\nGROUP BY\n        l1.EmpID, CAST(l1.LoginTime AS DATE)\n
    \n

    And here is another query that also works for multiple signin/signouts of a user during the same day. This will list all of his signin/signouts in a day:

    \n
    ;WITH cte1 AS\n(\n    SELECT  *\n            , ROW_NUMBER() OVER \n                (PARTITION BY EmpID, CAST(LoginTime AS DATE) ORDER BY LoginTime) \n                AS num\n    FROM    Login\n)\n\nSELECT  l1.EmpID\n        , l1.LoginTime [SignIn]\n        , l2.LoginTime [SignOut]\nFROM    cte1 l1\nLEFT JOIN   \n        cte1 l2 ON \n        l2.EmpID = l1.EmpID\nAND     CAST(l2.LoginTime AS DATE) = CAST(l1.LoginTime AS DATE)\nAND     l2.num = l1.num + 1\nWHERE   l1.status = 'SignIn'\n
    \n

    Here is SQL Fiddle for last two queries that handle multiple signin/signout scenarios of a user in a single day, for that purpose I added user with EmpID 102 to sample data.

    \n soup wrap:

    You could get it like this:

    SELECT  l1.EmpID
            , l1.LoginTime [SignIn]
            , l2.LoginTime [SignOut]
    FROM    Login l1
    LEFT JOIN   
            Login l2 ON 
            l2.EmpID = l1.EmpID
    AND     CAST(l2.LoginTime AS DATE) = CAST(l1.LoginTime AS DATE)
    AND     l2.status = 'SignOut'
    WHERE   l1.status = 'SignIn'
    

    Note that if an employee had more than one signin/signout per day and you wanted to get his first SignIn and last SignOut for the day, you would have to change the query:

    SELECT  l1.EmpID
            , MIN(l1.LoginTime) [SignIn]
            , MAX(l2.LoginTime) [SignOut]
    FROM    Login l1
    LEFT JOIN   
            Login l2 ON 
            l2.EmpID = l1.EmpID
    AND     CAST(l2.LoginTime AS DATE) = CAST(l1.LoginTime AS DATE)
    AND     l2.status = 'SignOut'
    WHERE   l1.status = 'SignIn'
    GROUP BY
            l1.EmpID, CAST(l1.LoginTime AS DATE)
    

    And here is another query that also works for multiple signin/signouts of a user during the same day. This will list all of his signin/signouts in a day:

    ;WITH cte1 AS
    (
        SELECT  *
                , ROW_NUMBER() OVER 
                    (PARTITION BY EmpID, CAST(LoginTime AS DATE) ORDER BY LoginTime) 
                    AS num
        FROM    Login
    )
    
    SELECT  l1.EmpID
            , l1.LoginTime [SignIn]
            , l2.LoginTime [SignOut]
    FROM    cte1 l1
    LEFT JOIN   
            cte1 l2 ON 
            l2.EmpID = l1.EmpID
    AND     CAST(l2.LoginTime AS DATE) = CAST(l1.LoginTime AS DATE)
    AND     l2.num = l1.num + 1
    WHERE   l1.status = 'SignIn'
    

    Here is a SQL Fiddle for the last two queries, which handle multiple signin/signout scenarios for a user in a single day; for that purpose I added a user with EmpID 102 to the sample data.
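
    The CTE/ROW_NUMBER pairing technique also works in SQLite (3.25+), as this Python sqlite3 sketch with one invented employee shows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Login (EmpID INTEGER, LoginTime TEXT, status TEXT);
INSERT INTO Login VALUES
 (101, '2013-03-06 09:00', 'SignIn'),
 (101, '2013-03-06 12:00', 'SignOut'),
 (101, '2013-03-06 13:00', 'SignIn'),
 (101, '2013-03-06 17:00', 'SignOut');
""")

# number each event per employee per day, then pair event n with event n+1
pairs = conn.execute("""
    WITH cte1 AS (
        SELECT *, ROW_NUMBER() OVER
            (PARTITION BY EmpID, date(LoginTime) ORDER BY LoginTime) AS num
        FROM Login
    )
    SELECT l1.LoginTime, l2.LoginTime
    FROM cte1 l1
    LEFT JOIN cte1 l2
      ON  l2.EmpID = l1.EmpID
      AND date(l2.LoginTime) = date(l1.LoginTime)
      AND l2.num = l1.num + 1
    WHERE l1.status = 'SignIn'
    ORDER BY l1.LoginTime
""").fetchall()
```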

    qid & accept id: (15302356, 15302788) query: How can I use XSLT to combine two XML docs, similar to a SQL JOIN soup:

    Accomplishing this in XSLT is not quite as straightforward as it would be in SQL, but assuming you assembled the two input files into a single document ahead of time (which I would recommend if it's not problematic for you):

    \n
    \n    \n        \n        \n        \n        \n    \n    \n        \n        \n        \n        \n        \n        \n        \n        \n        \n    \n\n
    \n

    This XSLT can be used to join the data together:

    \n
    \n  \n  \n  \n\n  \n  \n    \n      \n    \n  \n\n  \n    \n      \n      \n\n      \n      \n    \n  \n\n  \n  \n\n
    \n

    When this is run on the input XML above, it produces:

    \n
    \n  \n    \n      \n      \n    \n    \n      \n      \n    \n    \n      \n      \n      \n    \n    \n      \n      \n    \n  \n\n
    \n

    And if you can change your XML a little to indicate the outer and inner group, and which attribute to match on, like this:

    \n
    \n  \n     ....\n  \n  \n     ....\n  \n\n
    \n

    Then you could use this more generic XSLT which, while less efficient, should work for any input similar to the above:

    \n
    \n  \n\n  \n  \n\n  \n  \n    \n      \n    \n  \n\n  \n    \n      \n      \n\n      \n      \n      \n    \n  \n\n  \n    \n      \n    \n  \n\n  \n  \n\n
    \n soup wrap:

    Accomplishing this in XSLT is not quite as straightforward as it would be in SQL, but assuming you assembled the two input files into a single document ahead of time (which I would recommend if it's not problematic for you):

    [Combined input XML sample lost in extraction: the markup was stripped, leaving only blank lines.]

    This XSLT can be used to join the data together:

    [XSLT stylesheet lost in extraction: the markup was stripped.]

    When this is run on the input XML above, it produces:

    [Output XML lost in extraction: the markup was stripped.]

    And if you can change your XML a little to indicate the outer and inner group, and which attribute to match on, like this:

    [Modified input XML sample lost in extraction: the markup was stripped.]

    Then you could use this more generic XSLT which, while less efficient, should work for any input similar to the above:

    [Generic XSLT stylesheet lost in extraction: the markup was stripped.]
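
    Since the stylesheet markup in this answer did not survive extraction, here is a language-neutral sketch of the same key-based join idea using Python's xml.etree.ElementTree; all element and attribute names are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Both inputs wrapped in one root, records matched on a shared id attribute.
# Element and attribute names here are invented for illustration.
doc = ET.fromstring("""
<root>
  <customers>
    <customer id="1" name="Alice"/>
    <customer id="2" name="Bob"/>
  </customers>
  <orders>
    <order custid="1" item="widget"/>
    <order custid="1" item="gadget"/>
    <order custid="2" item="sprocket"/>
  </orders>
</root>""")

# Index the inner side by key, then emit one joined element per match --
# the same lookup mechanism an xsl:key-based XSLT join uses
orders_by_cust = {}
for order in doc.find("orders"):
    orders_by_cust.setdefault(order.get("custid"), []).append(order)

result = ET.Element("joined")
for cust in doc.find("customers"):
    for order in orders_by_cust.get(cust.get("id"), []):
        ET.SubElement(result, "row",
                      name=cust.get("name"), item=order.get("item"))
```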
    qid & accept id: (15357576, 15357628) query: select where.... electrical status is required in ms sql 2005 soup:

    You can simply do this:

    \n
    SELECT DISTINCT SONO,  ElectricalStatus\nFROM tablename\nWHERE  ElectricalStatus = 'Required';\n
    \n

    SQL Fiddle Demo

    \n

    this will give you:

    \n
    | SONO | ELECTRICALSTATUS |\n---------------------------\n|    1 |         Required |\n|    2 |         Required |\n
    \n soup wrap:

    You can simply do this:

    SELECT DISTINCT SONO,  ElectricalStatus
    FROM tablename
    WHERE  ElectricalStatus = 'Required';
    

    SQL Fiddle Demo

    this will give you:

    | SONO | ELECTRICALSTATUS |
    ---------------------------
    |    1 |         Required |
    |    2 |         Required |
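
    For completeness, the same filter reproduced in SQLite through Python's sqlite3 with invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tablename (SONO INTEGER, ElectricalStatus TEXT);
INSERT INTO tablename VALUES
 (1, 'Required'), (1, 'Required'), (2, 'Required'), (3, 'Done');
""")

# DISTINCT collapses the duplicate (1, 'Required') pair
rows = conn.execute("""
    SELECT DISTINCT SONO, ElectricalStatus
    FROM tablename
    WHERE ElectricalStatus = 'Required'
    ORDER BY SONO
""").fetchall()
```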
    
    qid & accept id: (15359303, 15359581) query: Change table contents to match a query without deleting all rows soup:
    delete from tblA where\n  (col1, col2, ...) not in (queryB);\n\ninsert into tblA \n  (queryB) minus (select * from tblA);\n
    \n
    \n

    EDIT :
    \nYou can calculate queryB once if small temporary table will be created (which will contain < 10% of rows of table tblA).
    \nIt is assumed that queryB.col1 is never null

    \n
    create table diff as\n   select \n      ta.rowid ta_rid, \n      tb.*\n   from tblA ta \n      full join (queryB) tb \n         on ta.col1 = tb.col1 \n         and ta.col2 = tb.col2 \n         and ta.col3 = tb.col3 \n   where \n      ta.rowid is null or tb.col1 is null; \n\ndelete from tblA ta \n  where ta.rowid in (select d.ta_rid from diff d where d.ta_rid is not null);\ninsert into tblA ta \n  select d.col1, d.col2, d.col3 from diff d where d.ta_rid is null;      \n
    \n soup wrap:
    delete from tblA where
      (col1, col2, ...) not in (queryB);
    
    insert into tblA 
      (queryB) minus (select * from tblA);
    

    EDIT:
    You can calculate queryB just once by materialising it into a small temporary table (assumed to contain < 10% of the rows of tblA).
    It is assumed that queryB.col1 is never null.

    create table diff as
       select 
          ta.rowid ta_rid, 
          tb.*
       from tblA ta 
          full join (queryB) tb 
             on ta.col1 = tb.col1 
             and ta.col2 = tb.col2 
             and ta.col3 = tb.col3 
       where 
          ta.rowid is null or tb.col1 is null; 
    
    delete from tblA ta 
      where ta.rowid in (select d.ta_rid from diff d where d.ta_rid is not null);
    insert into tblA 
      select d.col1, d.col2, d.col3 from diff d where d.ta_rid is null;
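
    The first delete-then-insert variant is easy to try in SQLite via Python's sqlite3; note that Oracle's MINUS is spelled EXCEPT in SQLite (and in the SQL standard). A plain table stands in for queryB's result set:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblA (col1 INTEGER);
CREATE TABLE tblB (col1 INTEGER);   -- stands in for queryB's result set
INSERT INTO tblA VALUES (1), (2), (3);
INSERT INTO tblB VALUES (2), (3), (4);
""")

# delete rows that left the target set, then insert the missing ones
conn.execute("DELETE FROM tblA WHERE col1 NOT IN (SELECT col1 FROM tblB)")
conn.execute("""INSERT INTO tblA
                SELECT col1 FROM tblB EXCEPT SELECT col1 FROM tblA""")

result = sorted(r[0] for r in conn.execute("SELECT col1 FROM tblA"))
```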
    
    qid & accept id: (15376335, 15376369) query: Pivoting two colums leaving other columns in a table unchanged soup:
    SELECT   Product_ID, Date, Colour, Size, Material\nFROM\n        (\n            SELECT  Product_ID, Date, Attribute, Value\n            FROM    Table1\n        ) org\n        PIVOT\n        (\n            MAX(Value)\n            FOR Attribute IN (Colour, Size, Material)\n        ) pivotHeader\n
    \n\n

    OUTPUT

    \n
    ╔════════════╦══════╦════════╦════════╦══════════╗\n║ PRODUCT_ID ║ DATE ║ COLOUR ║  SIZE  ║ MATERIAL ║\n╠════════════╬══════╬════════╬════════╬══════════╣\n║   10025135 ║ 2009 ║ Red    ║ 20 cm  ║ Steel    ║\n║   10025135 ║ 2010 ║ Green  ║ (null) ║ Alloy    ║\n║   10025136 ║ 2009 ║ Black  ║ 30cm   ║ (null)   ║\n╚════════════╩══════╩════════╩════════╩══════════╝\n
    \n

    The other way of doing this is by using MAX() and CASE

    \n
    SELECT  Product_ID, DATE,\n        MAX(CASE WHEN Attribute = 'Colour' THEN Value END ) Colour,\n        MAX(CASE WHEN Attribute = 'Size' THEN Value END ) Size,\n        MAX(CASE WHEN Attribute = 'Material' THEN Value END ) Material\nFROM    Table1\nGROUP   BY Product_ID, DATE\n
    \n\n soup wrap:
    SELECT   Product_ID, Date, Colour, Size, Material
    FROM
            (
                SELECT  Product_ID, Date, Attribute, Value
                FROM    Table1
            ) org
            PIVOT
            (
                MAX(Value)
                FOR Attribute IN (Colour, Size, Material)
            ) pivotHeader
    

    OUTPUT

    ╔════════════╦══════╦════════╦════════╦══════════╗
    ║ PRODUCT_ID ║ DATE ║ COLOUR ║  SIZE  ║ MATERIAL ║
    ╠════════════╬══════╬════════╬════════╬══════════╣
    ║   10025135 ║ 2009 ║ Red    ║ 20 cm  ║ Steel    ║
    ║   10025135 ║ 2010 ║ Green  ║ (null) ║ Alloy    ║
    ║   10025136 ║ 2009 ║ Black  ║ 30cm   ║ (null)   ║
    ╚════════════╩══════╩════════╩════════╩══════════╝
    

    The other way of doing this is by using MAX() and CASE

    SELECT  Product_ID, DATE,
            MAX(CASE WHEN Attribute = 'Colour' THEN Value END ) Colour,
            MAX(CASE WHEN Attribute = 'Size' THEN Value END ) Size,
            MAX(CASE WHEN Attribute = 'Material' THEN Value END ) Material
    FROM    Table1
    GROUP   BY Product_ID, DATE
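
    The MAX() + CASE form is the portable one; SQLite has no PIVOT operator, so this Python sqlite3 sketch (with one invented product) uses it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (Product_ID INTEGER, Date TEXT, Attribute TEXT, Value TEXT);
INSERT INTO Table1 VALUES
 (10025135, '2009', 'Colour',   'Red'),
 (10025135, '2009', 'Size',     '20 cm'),
 (10025135, '2009', 'Material', 'Steel');
""")

# each MAX(CASE ...) picks out one attribute's value per group
row = conn.execute("""
    SELECT Product_ID, Date,
           MAX(CASE WHEN Attribute = 'Colour'   THEN Value END) AS Colour,
           MAX(CASE WHEN Attribute = 'Size'     THEN Value END) AS Size,
           MAX(CASE WHEN Attribute = 'Material' THEN Value END) AS Material
    FROM Table1
    GROUP BY Product_ID, Date
""").fetchone()
```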
    
    qid & accept id: (15387808, 15387854) query: MySQL Join two tables count and sum from second table soup:

    You could use two sub-queries:

    \n
    SELECT  a.*\n      , (SELECT Count(b.id) FROM inquiries I1 WHERE I1.dealer_id = a.id) as counttotal\n      , (SELECT SUM(b.cost) FROM inquiries I2 WHERE I2.dealer_id = a.id) as turnover\nFROM dealers a\nORDER BY name ASC\n
    \n

    Or

    \n
    SELECT  a.*\n     , COALESCE(T.counttotal, 0) as counttotal   -- use coalesce or equiv. to turn NULLs to 0\n     , COALESCE(T.turnover, 0) as turnover       -- use coalesce or equiv. to turn NULLs to 0\n FROM dealers a\n LEFT OUTER JOIN (SELECT a.id, Count(b.id) as counttotal, SUM(b.cost) as turnover\n               FROM dealers a1 \n               INNER JOIN inquiries b ON a1.id = b.dealer_id\n              GROUP BY a.id) T\n         ON a.id = T.id\nORDER BY a.name\n
    \n soup wrap:

    You could use two sub-queries:

    SELECT  a.*
          , (SELECT Count(I1.id) FROM inquiries I1 WHERE I1.dealer_id = a.id) as counttotal
          , (SELECT SUM(I2.cost) FROM inquiries I2 WHERE I2.dealer_id = a.id) as turnover
    FROM dealers a
    ORDER BY name ASC
    

    Or

    SELECT  a.*
         , COALESCE(T.counttotal, 0) as counttotal   -- use coalesce or equiv. to turn NULLs to 0
         , COALESCE(T.turnover, 0) as turnover       -- use coalesce or equiv. to turn NULLs to 0
     FROM dealers a
     LEFT OUTER JOIN (SELECT a1.id, Count(b.id) as counttotal, SUM(b.cost) as turnover
                   FROM dealers a1 
                   INNER JOIN inquiries b ON a1.id = b.dealer_id
                  GROUP BY a1.id) T
             ON a.id = T.id
    ORDER BY a.name
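
    A reduced sketch of the two-subquery variant in SQLite via Python's sqlite3, with two invented dealers (one with no inquiries, to show the zero-defaulting):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dealers (id INTEGER, name TEXT);
CREATE TABLE inquiries (id INTEGER, dealer_id INTEGER, cost REAL);
INSERT INTO dealers VALUES (1, 'Ann'), (2, 'Bob');   -- Bob has no inquiries
INSERT INTO inquiries VALUES (1, 1, 100.0), (2, 1, 50.0);
""")

# COUNT() of an empty set is 0; SUM() would be NULL, hence the COALESCE
rows = conn.execute("""
    SELECT a.name,
           (SELECT COUNT(i.id) FROM inquiries i WHERE i.dealer_id = a.id),
           (SELECT COALESCE(SUM(i.cost), 0) FROM inquiries i WHERE i.dealer_id = a.id)
    FROM dealers a
    ORDER BY a.name
""").fetchall()
```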
    
    qid & accept id: (15400897, 15401004) query: How to change date in database soup:

    Try this -

    \n
    UPDATE TABLE set fieldname =  DATE_ADD( fieldname, INTERVAL 3 YEAR ) \n
    \n

    For more information and play part with dates you can check this link :-

    \n

    function_date-add

    \n

    Working Fiddle -- http://sqlfiddle.com/#!2/9c669/1

    \n

    EDIT

    \n

    This solution updates date type is VARCHAR and structure of date like - 2 January 2001

    \n

    It will update date to 2 January 2004 by the interval of 3

    \n

    Although the best way to handle date is use date DATATYPEs(ex timestamp, datetime etc) instead of saving it in VARCHARs

    \n

    Tested code --

    \n
    UPDATE date \nSET `varchardate`= DATE_FORMAT(DATE_ADD(  str_to_date(`varchardate`, '%d %M %Y'), INTERVAL 3 YEAR ) , '%d %M %Y')\n
    \n soup wrap:

    Try this -

    UPDATE tablename SET fieldname = DATE_ADD(fieldname, INTERVAL 3 YEAR)
    

    For more information on date arithmetic, you can check this link:

    function_date-add

    Working Fiddle -- http://sqlfiddle.com/#!2/9c669/1

    EDIT

    This solution is for dates stored as VARCHAR in a format like 2 January 2001.

    It will update the date to 2 January 2004, i.e. shift it by an interval of 3 years.

    That said, the best way to handle dates is to use date datatypes (e.g. TIMESTAMP, DATETIME) instead of saving them in VARCHARs.

    Tested code --

    UPDATE date 
    SET `varchardate`= DATE_FORMAT(DATE_ADD(  str_to_date(`varchardate`, '%d %M %Y'), INTERVAL 3 YEAR ) , '%d %M %Y')
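
    The same transformation sketched outside the database in plain Python, where strptime/strftime play the roles of STR_TO_DATE and DATE_FORMAT (a 29 February source date would need special-casing when the year shifts):

```python
from datetime import datetime

def add_years(text, years=3, fmt="%d %B %Y"):
    """Shift a date stored as text like '2 January 2001' by whole years."""
    d = datetime.strptime(text, fmt)
    # NB: replace() raises for 29 February when the target year is not a leap year
    return d.replace(year=d.year + years).strftime(fmt).lstrip("0")

shifted = add_years("2 January 2001")
```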
    
    qid & accept id: (15414398, 15414661) query: Merge two or more columns dynamically based on table columns? soup:

    You will want to use the PIVOT function to transform the data from columns into rows. If you are going to have an unknown number of values that need to be columns, then you will need to use dynamic SQL.

    \n

    It is easier to see a static or hard-coded version first and then convert it into a dynamic SQL version. A static version is used when you have a known number of values:

    \n
    select *\nfrom\n(\n  select e.employeeid,\n    s.subsection +'_'+s.sectioncode+'_Cost' Section,\n    e.cost\n  from employee e\n  inner join sectionnames s\n    on e.sectionid = s.sectionid\n) src\npivot\n(\n  max(cost)\n  for section in (Individual_xYz_Cost, Family_xYz_Cost,\n                  Friends_CYD_Cost, level1_PCPO_Cost,\n                  level2_PCPO_Cost, level3_PCPO_Cost)\n) piv;\n
    \n

    See SQL Fiddle with Demo.

    \n

    If you need the query to be flexible, then you will convert this to use dynamic SQL:

    \n
    DECLARE @cols AS NVARCHAR(MAX),\n    @query  AS NVARCHAR(MAX)\n\nselect @cols = STUFF((SELECT ',' + QUOTENAME(subsection +'_'+sectioncode+'_Cost') \n                    from SectionNames\n                    group by subsection, sectioncode, sectionid\n                    order by sectionid\n            FOR XML PATH(''), TYPE\n            ).value('.', 'NVARCHAR(MAX)') \n        ,1,1,'')\n\nset @query = 'SELECT employeeid,' + @cols + ' \n              from \n             (\n                select e.employeeid,\n                  s.subsection +''_''+s.sectioncode+''_Cost'' Section,\n                  e.cost\n                from employee e\n                inner join sectionnames s\n                  on e.sectionid = s.sectionid\n            ) x\n            pivot \n            (\n                max(cost)\n                for section in (' + @cols + ')\n            ) p '\n\nexecute(@query)\n
    \n

    See SQL Fiddle with Demo

    \n

    The result of both is:

    \n
    | EMPLOYEEID | INDIVIDUAL_XYZ_COST | FAMILY_XYZ_COST | FRIENDS_CYD_COST | LEVEL1_PCPO_COST | LEVEL2_PCPO_COST | LEVEL3_PCPO_COST |\n----------------------------------------------------------------------------------------------------------------------------------\n|          1 |                $200 |            $300 |              $40 |              $10 |         No Level |         No Level |\n
    \n soup wrap:

    You will want to use the PIVOT function to transform the data from rows into columns. If you are going to have an unknown number of values that need to become columns, then you will need to use dynamic SQL.

    It is easier to see a static or hard-coded version first and then convert it into a dynamic SQL version. A static version is used when you have a known number of values:

    select *
    from
    (
      select e.employeeid,
        s.subsection +'_'+s.sectioncode+'_Cost' Section,
        e.cost
      from employee e
      inner join sectionnames s
        on e.sectionid = s.sectionid
    ) src
    pivot
    (
      max(cost)
      for section in (Individual_xYz_Cost, Family_xYz_Cost,
                      Friends_CYD_Cost, level1_PCPO_Cost,
                      level2_PCPO_Cost, level3_PCPO_Cost)
    ) piv;
    

    See SQL Fiddle with Demo.

    If you need the query to be flexible, then you will convert this to use dynamic SQL:

    DECLARE @cols AS NVARCHAR(MAX),
        @query  AS NVARCHAR(MAX)
    
    select @cols = STUFF((SELECT ',' + QUOTENAME(subsection +'_'+sectioncode+'_Cost') 
                        from SectionNames
                        group by subsection, sectioncode, sectionid
                        order by sectionid
                FOR XML PATH(''), TYPE
                ).value('.', 'NVARCHAR(MAX)') 
            ,1,1,'')
    
    set @query = 'SELECT employeeid,' + @cols + ' 
                  from 
                 (
                    select e.employeeid,
                      s.subsection +''_''+s.sectioncode+''_Cost'' Section,
                      e.cost
                    from employee e
                    inner join sectionnames s
                      on e.sectionid = s.sectionid
                ) x
                pivot 
                (
                    max(cost)
                    for section in (' + @cols + ')
                ) p '
    
    execute(@query)
    

    See SQL Fiddle with Demo

    The result of both is:

    | EMPLOYEEID | INDIVIDUAL_XYZ_COST | FAMILY_XYZ_COST | FRIENDS_CYD_COST | LEVEL1_PCPO_COST | LEVEL2_PCPO_COST | LEVEL3_PCPO_COST |
    ----------------------------------------------------------------------------------------------------------------------------------
    |          1 |                $200 |            $300 |              $40 |              $10 |         No Level |         No Level |
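
    The dynamic-column idea is engine-independent: read the column list from a catalogue table, assemble the query string, then run it. A sketch in Python with SQLite, using MAX() + CASE instead of PIVOT (which SQLite lacks); as with QUOTENAME in the T-SQL version, the names interpolated into the SQL come from a trusted table, not user input:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sectionnames (sectionid INTEGER, subsection TEXT, sectioncode TEXT);
CREATE TABLE employee (employeeid INTEGER, sectionid INTEGER, cost TEXT);
INSERT INTO sectionnames VALUES (1, 'Individual', 'xYz'), (2, 'Family', 'xYz');
INSERT INTO employee VALUES (1, 1, '$200'), (1, 2, '$300');
""")

# build one MAX(CASE ...) column per catalogue row
cols = conn.execute(
    "SELECT sectionid, subsection, sectioncode FROM sectionnames ORDER BY sectionid"
).fetchall()
case_cols = ", ".join(
    f'MAX(CASE WHEN s.sectionid = {sid} THEN e.cost END) AS "{sub}_{code}_Cost"'
    for sid, sub, code in cols
)
query = f"""
    SELECT e.employeeid, {case_cols}
    FROM employee e JOIN sectionnames s ON e.sectionid = s.sectionid
    GROUP BY e.employeeid
"""
row = conn.execute(query).fetchone()
```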
    
    qid & accept id: (15420689, 15422128) query: How do you do a PostgreSQL fulltext search on encoded or encrypted data? soup:

    Encrypted values

    \n

    soup wrap:

    Encrypted values

    For encrypted values you can't. Even if you created the tsvector client-side, the tsvector would contain a form of the encrypted text so it wouldn't be acceptable for most applications. Observe:

    regress=> SELECT to_tsvector('my secret password is CandyStrip3r');
                   to_tsvector                
    ------------------------------------------
     'candystrip3r':5 'password':3 'secret':2
    (1 row)
    

    ... whoops. It doesn't matter if you create that value client-side instead of using to_tsvector, it'll still have your password in cleartext. You could encrypt the tsvector, but then you couldn't use it for full-text search.

    Sure, given the encrypted value:

    CREATE EXTENSION pgcrypto;
    
    regress=> SELECT encrypt( convert_to('my s3kritPassw1rd','utf-8'), '\xdeadbeef', 'aes');
                                  encrypt                               
    --------------------------------------------------------------------
     \x10441717bfc843677d2b76ac357a55ac5566ffe737105332552f98c2338480ff
    (1 row)
    

    you can (but shouldn't) do something like this:

    regress=> SELECT to_tsvector( convert_from(decrypt('\x10441717bfc843677d2b76ac357a55ac5566ffe737105332552f98c2338480ff', '\xdeadbeef', 'aes'), 'utf-8') );
        to_tsvector     
    --------------------
     's3kritpassw1rd':2
    (1 row)
    

    ... but if the problems with that aren't immediately obvious after scrolling right in the code display box then you should really be getting somebody else to do your security design for you ;-)

    There's been tons of research on ways to perform operations on encrypted values without decrypting them, like adding two encrypted numbers together to produce a result that's encrypted with the same key, so the process doing the adding doesn't need the ability to decrypt the inputs in order to get the output. It's possible some of this could be applied to fts - but it's way beyond my level of expertise in the area and likely to be horribly inefficient and/or cryptographically weak anyway.

    Base64-encoded values

    For base64 you decode the base64 before feeding it into to_tsvector. Because decode returns a bytea and you know the encoded data is text, you need to use convert_from to decode the bytea into text in the database encoding, e.g.:

    regress=> SELECT encode(convert_to('some text to search','utf-8'), 'base64');
                encode            
    ------------------------------
     c29tZSB0ZXh0IHRvIHNlYXJjaA==
    (1 row)
    
    regress=> SELECT to_tsvector(convert_from( decode('c29tZSB0ZXh0IHRvIHNlYXJjaA==', 'base64'), getdatabaseencoding() ));
         to_tsvector     
    ---------------------
     'search':4 'text':2
    (1 row)
    

    In this case I've used the database encoding as the input to convert_from, but you need to make sure you use the encoding that the underlying base64 encoded text was in. Your application is responsible for getting this right. I suggest either storing the encoding in a 2nd column or ensuring that your application always encodes the text as utf-8 before applying base64 encoding.
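    The decode-then-index flow can be exercised client-side as well. A minimal Python sketch of the same round trip (the sample string and the utf-8 choice are just illustration):

```python
import base64

# Base64 only wraps bytes; the text encoding must be tracked separately.
encoded = base64.b64encode("some text to search".encode("utf-8")).decode("ascii")
print(encoded)  # c29tZSB0ZXh0IHRvIHNlYXJjaA==

# Decode the base64, then decode the bytes with the *original* text encoding
# before handing the string to full-text indexing.
raw = base64.b64decode(encoded)
text = raw.decode("utf-8")
print(text)  # some text to search
```

    Decoding with the wrong encoding here is the client-side analogue of passing the wrong encoding name to convert_from.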

    qid & accept id: (15428168, 15428204) query: SQL Server - Create a copy of a database table and place it in the same database? soup:

    soup wrap:

    Use SELECT ... INTO:

    SELECT *
    INTO ABC_1
    FROM ABC;
    

    This will create a new table ABC_1 that has the same column structure as ABC and contains the same data. Constraints (e.g. keys, default values), however, are -not- copied.

    You can run this query multiple times with a different table name each time.


    If you don't need to copy the data, only to create a new empty table with the same column structure, add a WHERE clause with a falsy expression:

    SELECT *
    INTO ABC_1
    FROM ABC
    WHERE 1 <> 1;
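    The same idea can be tried in miniature with SQLite's CREATE TABLE ... AS SELECT, which behaves like SELECT ... INTO here. A sqlite3 sketch (the ABC table and its rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ABC (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO ABC (name) VALUES (?)", [("a",), ("b",)])

# Copy column structure and data; constraints are not copied, as noted above.
conn.execute("CREATE TABLE ABC_1 AS SELECT * FROM ABC")
# Structure only: a predicate that is never true copies zero rows.
conn.execute("CREATE TABLE ABC_2 AS SELECT * FROM ABC WHERE 1 <> 1")

full = conn.execute("SELECT COUNT(*) FROM ABC_1").fetchone()[0]
empty = conn.execute("SELECT COUNT(*) FROM ABC_2").fetchone()[0]
print(full, empty)  # 2 0
```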
    
    qid & accept id: (15436509, 15436649) query: SQL: Copy some field values to another record inside the same table soup:
    soup wrap:
    UPDATE data a
           INNER JOIN data b
              ON a.originalid = b.id
    SET a.data = b.data
    

    OUTPUT

    ╔════╦════════════╦════════════╗
    ║ ID ║ ORIGINALID ║   STRING   ║
    ╠════╬════════════╬════════════╣
    ║  1 ║ (null)     ║ original 1 ║
    ║  2 ║ (null)     ║ original 2 ║
    ║  3 ║ 1          ║ original 1 ║
    ║  4 ║ 2          ║ original 2 ║
    ║  5 ║ 2          ║ original 2 ║
    ╚════╩════════════╩════════════╝
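    The same copy-from-original update can be written with a correlated subquery, which also works on engines without UPDATE ... JOIN. A sqlite3 sketch with the sample rows implied by the output table (column names assumed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (id INTEGER PRIMARY KEY, originalid INTEGER, string TEXT)")
rows = [(1, None, "original 1"), (2, None, "original 2"),
        (3, 1, None), (4, 2, None), (5, 2, None)]
conn.executemany("INSERT INTO data VALUES (?,?,?)", rows)

# Pull the string from the row that originalid points at.
conn.execute("""
    UPDATE data
    SET    string = (SELECT b.string FROM data b WHERE b.id = data.originalid)
    WHERE  originalid IS NOT NULL
""")
result = conn.execute("SELECT id, string FROM data ORDER BY id").fetchall()
print(result)  # copies 3, 4, 5 now carry their originals' strings
```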
    
    qid & accept id: (15445216, 15445327) query: How to get last day of a month from a given date? soup:

    soup wrap:

    Oracle has a last_day() function:

    SELECT LAST_DAY(to_date('04/04/1924','MM/DD/YYYY')) from dual;
    
    SELECT LAST_DAY(ADD_MONTHS(to_date('04/04/1924','MM/DD/YYYY'), -1)) from dual;
    
    SELECT LAST_DAY(ADD_MONTHS(to_date('04/04/1924','MM/DD/YYYY'), -2)) from dual;
    

    Results:

    April, 30 1924 00:00:00+0000
    
    March, 31 1924 00:00:00+0000
    
    February, 29 1924 00:00:00+0000
    

    Use Add_Months() on your date to get the appropriate month, and then apply last_day().
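    Outside Oracle, the same result falls out of the standard library. A Python sketch using calendar.monthrange (the helper name last_day is mine):

```python
import calendar
from datetime import date

def last_day(d: date) -> date:
    # monthrange returns (weekday of first day, number of days in month);
    # clamping the day to that count gives the month's last date.
    return d.replace(day=calendar.monthrange(d.year, d.month)[1])

print(last_day(date(1924, 4, 4)))  # 1924-04-30
print(last_day(date(1924, 2, 4)))  # 1924-02-29 (leap year)
```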

    qid & accept id: (15448705, 15448712) query: Maximum of the count of the grouped elements soup:

    soup wrap:

    Just add TOP to limit the number of results:

    select TOP 1 COUNT(*) as 'Number of times a product is sold at same quantity' 
    from  Sales.SalesOrderDetail 
    group by  OrderQty, ProductID 
    order by  COUNT(*) desc
    

    UPDATE 1

    WITH results 
    AS
    (
      select COUNT(*) as [Number of times a product is sold at same quantity],
             DENSE_RANK() OVER (ORDER BY COUNT(*) DESC) rank_no 
      from   Sales.SalesOrderDetail 
      group   by OrderQty, ProductID 
    )
    SELECT [Number of times a product is sold at same quantity]
    FROM   results
    WHERE  rank_no = 2
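    The TOP 1 ... ORDER BY COUNT(*) DESC logic is just "largest group size". A Python sketch with invented (OrderQty, ProductID) pairs standing in for Sales.SalesOrderDetail rows:

```python
from collections import Counter

# Each tuple is one sale at a given quantity of a given product.
sales = [(1, 100), (1, 100), (1, 100), (2, 100), (2, 100), (5, 200)]
counts = Counter(sales)  # GROUP BY OrderQty, ProductID with COUNT(*)

# Equivalent of TOP 1 ... ORDER BY COUNT(*) DESC: the biggest group size.
top = counts.most_common(1)[0][1]
print(top)  # 3
```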
    
    qid & accept id: (15512015, 15513671) query: Update PostgreSQL table with values from self soup:

    soup wrap:

    Correlated subqueries are infamous for abysmal performance. Doesn't matter much for small tables, matters a lot for big tables. Use one of these instead, preferably the second:

    Query 1

    WITH cte AS (
       SELECT *, dense_rank() OVER (ORDER BY dob) AS drk
       FROM   person
        )
    UPDATE person p
    SET    younger_sibling_name = y.name
          ,younger_sibling_dob  = y.dob
    FROM   cte x
    JOIN   (SELECT DISTINCT ON (drk) * FROM cte) y ON y.drk = x.drk + 1
    WHERE  x.pid = p.pid;
    

    -> SQLfiddle (with extended test case)

    • In the CTE cte use the window function dense_rank() to get a rank without gaps according to the dob for every person.

    • Join cte to itself, but remove duplicates on dob from the second instance. Thereby everybody gets exactly one UPDATE. If more than one person shares the same dob, the same one is selected as younger sibling for all persons on the next dob. I do this with:

      (SELECT DISTINCT ON (rnk) * FROM cte)
      

      Add ORDER BY rnk, ... if you want to pick a particular person for every dob.

    • If no younger person exists, no UPDATE happens and the columns stay NULL.

    • Indices on dob and pid make this fast.

    Query 2

    WITH cte AS (
       SELECT dob, min(name) AS name
             ,row_number() OVER (ORDER BY dob) rn
       FROM   person p
       GROUP  BY dob
       )
    UPDATE person p
    SET    younger_sibling_name = y.name
          ,younger_sibling_dob  = y.dob
    FROM   cte x
    JOIN   cte y ON y.rn = x.rn + 1
    WHERE  x.dob = p.dob;
    

    -> SQLfiddle

    • This works because aggregate functions are applied before window functions. And it should be very fast, since both operations agree on the sort order.

    • Obviates the need for a later DISTINCT like in query 1.

    • Result is the same as query 1, exactly.
      Again, you can add more columns to ORDER BY to pick a particular person for every dob.

    • Only needs an index on dob to be fast.
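    The ranking logic of query 2 can be traced in plain Python. A sketch with invented people, where min(name) picks one representative per dob as the CTE does, and everyone on rank r gets rank r+1's representative:

```python
from itertools import groupby

# (pid, name, dob) rows standing in for the person table.
people = [(1, "Ann", "2000-01-01"), (2, "Bob", "2001-05-05"),
          (3, "Cal", "2001-05-05"), (4, "Dee", "2003-09-09")]

# Sort by dob; each dob group shares one dense rank.
people.sort(key=lambda p: p[2])
groups = [(dob, min(g, key=lambda p: p[1]))  # min(name): one representative per dob
          for dob, g in groupby(people, key=lambda p: p[2])]

# Everybody in rank r gets the representative of rank r+1 as younger sibling.
sibling = {}
for r, (dob, rep) in enumerate(groups):
    if r + 1 < len(groups):
        nxt = groups[r + 1][1]
        for pid, name, d in people:
            if d == dob:
                sibling[pid] = (nxt[1], nxt[2])
print(sibling)  # pid 4 (the youngest) gets no entry, mirroring the NULL columns
```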

    qid & accept id: (15532084, 15532181) query: How do I add a calculated column in sql workbench / j soup:

    soup wrap:

    You can do this by:

        ALTER TABLE table_one
        ADD COLUMN test_column VARCHAR(100) NULL;
    
        GO
    

    then update all rows by:

    UPDATE table_one
    SET test_column = (CASE WHEN LEFT(name,3) = 'Ads' THEN 'ok' ELSE 'no' END)
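    The add-column-then-backfill sequence can be sketched against sqlite3 (the 'Ads' prefix test mirrors the answer; sample rows are invented, and substr stands in for LEFT, which SQLite lacks):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_one (name TEXT)")
conn.executemany("INSERT INTO table_one VALUES (?)", [("Adsense",), ("Organic",)])

# Step 1: add the new column. Step 2: back-fill it from existing data.
conn.execute("ALTER TABLE table_one ADD COLUMN test_column VARCHAR(100)")
conn.execute("""
    UPDATE table_one
    SET test_column = CASE WHEN substr(name, 1, 3) = 'Ads' THEN 'ok' ELSE 'no' END
""")
result = conn.execute(
    "SELECT name, test_column FROM table_one ORDER BY name").fetchall()
print(result)  # [('Adsense', 'ok'), ('Organic', 'no')]
```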
    
    qid & accept id: (15541196, 15541225) query: how to fetch all data from one table in mysql? soup:

    soup wrap:

    Use LEFT JOIN instead:

    SELECT 
      m.medianame,
      IFNULL(COUNT(ad.id), 0) AS Total 
    FROM a_mediatype as m
    LEFT JOIN a_advertise   AS a   ON a.mediaTypeId    = m.mediaId
    LEFT JOIN a_ad_display  AS ad  ON ad.advId         = a.advId
    LEFT JOIN organization_ AS o   ON a.organizationId = o.organizationId
    LEFT JOIN organization_ AS p   ON o.organizationId = p.organizationId 
                                  AND p.organizationId = '37423'  
                                  AND o.treePath       LIKE CONCAT( p.treePath, '%')
    GROUP BY m.medianame;
    

    SQL Fiddle Demo

    This will give you:

    | MEDIANAME | TOTAL |
    ---------------------
    | animation |    13 |
    |     image |     2 |
    |     video |     0 |
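    The key point is counting the nullable side of the LEFT JOIN, so media types with no ads come out as 0 instead of disappearing. A reduced sqlite3 sketch with invented data shaped like the output above (only two of the four tables, to isolate the idea):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a_mediatype (mediaId INTEGER, medianame TEXT)")
conn.execute("CREATE TABLE a_advertise (advId INTEGER, mediaTypeId INTEGER)")
conn.executemany("INSERT INTO a_mediatype VALUES (?,?)",
                 [(1, "animation"), (2, "image"), (3, "video")])
conn.executemany("INSERT INTO a_advertise VALUES (?,?)",
                 [(10, 1), (11, 1), (12, 2)])  # nothing references 'video'

# COUNT over the nullable column counts only matched rows, so 'video' gets 0.
rows = conn.execute("""
    SELECT m.medianame, COUNT(a.advId) AS total
    FROM a_mediatype m
    LEFT JOIN a_advertise a ON a.mediaTypeId = m.mediaId
    GROUP BY m.medianame
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('animation', 2), ('image', 1), ('video', 0)]
```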
    
    qid & accept id: (15543977, 15546165) query: MS SQL Server 2008 :Getting start date and end date of the week to next 8 weeks soup:

    soup wrap:

    Try this:

    DECLARE @startDate DATETIME
    DECLARE @currentDate DATETIME
    DECLARE @numberOfWeeks INT
    
    DECLARE @dates TABLE(
        StartDate DateTime,
        EndDate DateTime 
    )
    
    SET @startDate = GETDATE()--'2012-01-01' -- Put whatever you want here
    SET @numberOfWeeks = 8 -- Choose number of weeks here
    SET @currentDate = @startDate
    
    while @currentDate < dateadd(week, @numberOfWeeks, @startDate)
    begin
        INSERT INTO @Dates(StartDate, EndDate) VALUES (@currentDate, dateadd(day, 6, @currentDate))
        set @currentDate = dateadd(day, 7, @currentDate);
    end
    
    SELECT * FROM @dates
    

    This will give you something like this:

    StartDate           EndDate 
    21/03/2013 11:22:46 27/03/2013 11:22:46 
    28/03/2013 11:22:46 03/04/2013 11:22:46 
    04/04/2013 11:22:46 10/04/2013 11:22:46 
    11/04/2013 11:22:46 17/04/2013 11:22:46 
    18/04/2013 11:22:46 24/04/2013 11:22:46 
    25/04/2013 11:22:46 01/05/2013 11:22:46 
    02/05/2013 11:22:46 08/05/2013 11:22:46 
    09/05/2013 11:22:46 15/05/2013 11:22:46 
    

    Or you could tweak the final select if you don't want the time component, like this:

    SELECT CONVERT(VARCHAR, StartDate, 103), CONVERT(VARCHAR, EndDate, 103) FROM @dates
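    The WHILE loop is just generating consecutive 7-day windows, which a loop-free sketch makes explicit. Python version (start date and week count taken from the sample output above):

```python
from datetime import date, timedelta

def week_ranges(start: date, weeks: int):
    # Each window covers 7 days: start + 7i .. start + 7i + 6,
    # matching the dateadd(day, 6, @currentDate) end date in the T-SQL.
    return [(start + timedelta(days=7 * i), start + timedelta(days=7 * i + 6))
            for i in range(weeks)]

ranges = week_ranges(date(2013, 3, 21), 8)
print(ranges[0])   # (datetime.date(2013, 3, 21), datetime.date(2013, 3, 27))
print(ranges[-1])  # (datetime.date(2013, 5, 9), datetime.date(2013, 5, 15))
```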
    
    qid & accept id: (15559090, 15560009) query: Combine two tables into a new one so that select rows from the other one are ignored soup:

    soup wrap:

    According to your description, the query could look like this:
    I use LEFT JOIN / IS NULL to exclude rows from the second table for the same location and date. NOT EXISTS would be the other good option.
    UNION simply doesn't do what you describe.

    CREATE TABLE AS 
    SELECT date, location_code, product_code, quantity
    FROM   transactions_kitchen k
    
    UNION  ALL
    SELECT h.date, h.location_code, h.product_code, h.quantity
    FROM   transactions_admin h
    LEFT   JOIN transactions_kitchen k USING (location_code, date)
    WHERE  k.location_code IS NULL;
    

    Use CREATE TABLE AS instead of SELECT INTO.
    I quote the manual on SELECT INTO:

    CREATE TABLE AS is functionally similar to SELECT INTO. CREATE TABLE AS is the recommended syntax, since this form of SELECT INTO is not available in ECPG or PL/pgSQL, because they interpret the INTO clause differently. Furthermore, CREATE TABLE AS offers a superset of the functionality provided by SELECT INTO.

    Or, if the target table already exists:

    INSERT INTO transactions_combined ()
    SELECT ...
    

    I would advise not to use date as column name. It's a reserved word in every SQL standard and a function and data type name in PostgreSQL.
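    The LEFT JOIN / IS NULL anti-join can be tested in miniature with sqlite3. A sketch with invented rows and a reduced column set (the column is named day rather than date, per the advice above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE transactions_kitchen (day TEXT, location_code TEXT, quantity INTEGER);
    CREATE TABLE transactions_admin   (day TEXT, location_code TEXT, quantity INTEGER);
    INSERT INTO transactions_kitchen VALUES ('2013-03-01', 'L1', 5);
    INSERT INTO transactions_admin   VALUES ('2013-03-01', 'L1', 9);  -- shadowed
    INSERT INTO transactions_admin   VALUES ('2013-03-02', 'L2', 7);  -- kept
""")

# Kitchen rows always survive; admin rows survive only when no kitchen row
# exists for the same (location_code, day) -- the LEFT JOIN / IS NULL anti-join.
rows = conn.execute("""
    SELECT day, location_code, quantity FROM transactions_kitchen
    UNION ALL
    SELECT h.day, h.location_code, h.quantity
    FROM transactions_admin h
    LEFT JOIN transactions_kitchen k
           ON k.location_code = h.location_code AND k.day = h.day
    WHERE k.location_code IS NULL
""").fetchall()
print(sorted(rows))  # [('2013-03-01', 'L1', 5), ('2013-03-02', 'L2', 7)]
```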

    qid & accept id: (15616278, 15632566) query: SQL convert Seconds to Minutes to Hours soup:

    soup wrap:

    With the help of Steoleary, I managed a solution:

    DECLARE @SecondsToConvert int
    SET @SecondsToConvert = (SELECT (SUM(DATEDIFF(hour,InviteTime,EndTime) * 3600) + SUM(DATEDIFF(minute,InviteTime,EndTime) * 60) + SUM(DATEDIFF(second,InviteTime,EndTime) * 1)) AS [Seconds] 
     FROM [LcsCDR].[dbo].[SessionDetailsView]
    WHERE FromUri LIKE '%robert%'
    AND (CAST([InviteTime] AS date)) BETWEEN '2012-12-27' AND '2013-01-28'
    AND MediaTypes = '16'
    GROUP BY FromUri)
    
    -- Declare variables
     DECLARE @Hours int
     DECLARE @Minutes int
     DECLARE @Seconds int
    
    -- Set the calculations for hour, minute and second
    SET @Hours = @SecondsToConvert/3600
    SET @Minutes = (@SecondsToConvert % 3600) / 60
    SET @Seconds = @SecondsToConvert % 60
    
    SELECT COUNT(*) AS 'Aantal gesprekken'
    ,FromUri AS 'Medewerker'
    ,@Hours AS 'Uren' ,@Minutes AS 'Minuten' , @Seconds AS 'Seconden'
     FROM [LcsCDR].[dbo].[SessionDetailsView]
    WHERE FromUri LIKE '%robert%'
    AND (CAST([InviteTime] AS date)) BETWEEN '2012-12-27' AND '2013-01-28'
    AND MediaTypes = '16'
    GROUP BY FromUri
    

    As a result, I now get the correct time.

    302 robert  28  19  56
    

    28 hours, 19 minutes and 56 seconds, just like it should be :)
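    The final hours/minutes/seconds split is two divmod steps, shown here in Python with the answer's own total (28·3600 + 19·60 + 56 = 101996 seconds):

```python
def hms(total_seconds: int):
    # Same arithmetic as the T-SQL: /3600, % 3600 / 60, % 60.
    hours, rest = divmod(total_seconds, 3600)
    minutes, seconds = divmod(rest, 60)
    return hours, minutes, seconds

print(hms(101996))  # (28, 19, 56)
```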

    qid & accept id: (15616638, 15616794) query: How to remove duplicate rows from a join query in mysql soup:

    soup wrap:

    Basically, you can filter the result from the product of the two tables via a.Name < b.Name

    SELECT  a.Name Name1, b.Name Name2
    FROM    TableName a, TableName b
    WHERE   a.Name < b.Name
    ORDER   BY Name1, Name2
    

    OUTPUT

    ╔═══════╦═════════╗
    ║ NAME1 ║  NAME2  ║
    ╠═══════╬═════════╣
    ║ Amit  ║ Bhagi   ║
    ║ Amit  ║ Chinmoy ║
    ║ Bhagi ║ Chinmoy ║
    ╚═══════╩═════════╝
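    The a.Name < b.Name trick is the SQL spelling of "each unordered pair exactly once". In Python the same thing is itertools.combinations:

```python
from itertools import combinations

names = ["Amit", "Bhagi", "Chinmoy"]
# combinations yields each unordered pair once, in sorted order --
# the same effect as the a.Name < b.Name self-join filter.
pairs = [(a, b) for a, b in combinations(sorted(names), 2)]
print(pairs)  # [('Amit', 'Bhagi'), ('Amit', 'Chinmoy'), ('Bhagi', 'Chinmoy')]
```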
    
    qid & accept id: (15621609, 15621718) query: T-SQL Conditional Order By soup:

    soup wrap:

    CASE is an expression that returns a value. It is not for control-of-flow, like IF. And you can't use IF within a query.

    Unfortunately, there are some limitations with CASE expressions that make it cumbersome to do what you want. For example, all of the branches in a CASE expression must return the same type, or be implicitly convertible to the same type. I wouldn't try that with strings and dates. You also can't use CASE to specify sort direction.

    SELECT column_list_please
    FROM dbo.Product -- dbo prefix please
    ORDER BY 
      CASE WHEN @sortDir = 'asc' AND @sortOrder = 'name' THEN name END,
      CASE WHEN @sortDir = 'asc' AND @sortOrder = 'created_date' THEN created_date END,
      CASE WHEN @sortDir = 'desc' AND @sortOrder = 'name' THEN name END DESC,
      CASE WHEN @sortDir = 'desc' AND @sortOrder = 'created_date' THEN created_date END DESC;
    

    An arguably easier solution (especially if this gets more complex) is to use dynamic SQL. To thwart SQL injection you can test the values:

    IF @sortDir NOT IN ('asc', 'desc')
      OR @sortOrder NOT IN ('name', 'created_date')
    BEGIN
      RAISERROR('Invalid params', 11, 1);
      RETURN;
    END
    
    DECLARE @sql NVARCHAR(MAX) = N'SELECT column_list_please
      FROM dbo.Product ORDER BY ' + @sortOrder + ' ' + @sortDir;
    
    EXEC sp_executesql @sql;
    

    Another plus for dynamic SQL, in spite of all the fear-mongering that is spread about it: you can get the best plan for each sort variation, instead of one single plan that will optimize to whatever sort variation you happened to use first. It also performed best universally in a recent performance comparison I ran:

    http://sqlperformance.com/conditional-order-by
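    The whitelist-then-interpolate pattern ports to any host language. A Python/sqlite3 sketch (table and column names echo the answer; the Product rows are invented):

```python
import sqlite3

ALLOWED_COLS = {"name", "created_date"}
ALLOWED_DIRS = {"asc", "desc"}

def fetch_products(conn, sort_col, sort_dir):
    # Validate both identifiers against a whitelist before interpolation --
    # the same idea as the RAISERROR guard; never splice raw user input.
    if sort_col not in ALLOWED_COLS or sort_dir not in ALLOWED_DIRS:
        raise ValueError("Invalid params")
    sql = f"SELECT name, created_date FROM Product ORDER BY {sort_col} {sort_dir}"
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Product (name TEXT, created_date TEXT)")
conn.executemany("INSERT INTO Product VALUES (?,?)",
                 [("b", "2013-01-02"), ("a", "2013-01-03")])
print(fetch_products(conn, "name", "asc"))  # [('a', '2013-01-03'), ('b', '2013-01-02')]
```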

    qid & accept id: (15622474, 15623355) query: SQL Rolling Total up to a certain date soup:

    soup wrap:

    Unfortunately, with your table structure of points, you will have to unpivot the data. An unpivot takes the data from the multiple columns into rows. Once the data is in rows, it is much easier to join, filter, and total the points for each account. The code to unpivot the data will be similar to this:

    select account,
      cast(cast(year as varchar(4))+'-'+replace(month_col, 'M', '')+'-01' as date) full_date,
      pts
    from points
    unpivot
    (
      pts
      for month_col in ([M01], [M02], [M03], [M04], [M05], [M06], [M07], [M08], [M09], [M10], [M11], [M12])
    ) unpiv
    

    See SQL Fiddle with Demo. The query gives a result similar to this:

    | ACCOUNT |  FULL_DATE | PTS |
    ------------------------------
    |     123 | 2011-01-01 |  10 |
    |     123 | 2011-02-01 |   0 |
    |     123 | 2011-03-01 |   0 |
    |     123 | 2011-04-01 |   0 |
    |     123 | 2011-05-01 |  10 |
    

    Once the data is in this format, you can join the Customers table to get the total points for each account, so the code will be similar to the following:

    select 
      c.account, sum(pts) TotalPoints
    from customers c
    inner join 
    (
      select account,
          cast(cast(year as varchar(4))+'-'+replace(month_col, 'M', '')+'-01' as date) full_date,
        pts
      from points
      unpivot
      (
        pts
        for month_col in ([M01], [M02], [M03], [M04], [M05], [M06], [M07], [M08], [M09], [M10], [M11], [M12])
      ) unpiv
    ) p
      on c.account = p.account
    where 
    (
      c.enddate = '9999-12-31'
      and full_date >= dateadd(year, -1, getdate()) 
      and full_date <= getdate()  
    )
    or
    (
      c.enddate <> '9999-12-31'
      and dateadd(year, -1, [enddate]) <= full_date
      and full_date <= [enddate]
    )
    group by c.account
    

    See SQL Fiddle with Demo
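    The unpivot itself is mechanical: each M01..M12 column becomes its own (account, date, pts) row. A Python sketch on one invented source row:

```python
from datetime import date

# One source row per account/year with monthly point columns M01..M12
# (only the populated months shown here).
points = {"account": 123, "year": 2011,
          "M01": 10, "M02": 0, "M03": 0, "M04": 0, "M05": 10}

# Unpivot: the 'M05' column name itself supplies the month number.
rows = [(points["account"], date(points["year"], int(col[1:]), 1), pts)
        for col, pts in points.items() if col.startswith("M")]
print(rows[0])  # (123, datetime.date(2011, 1, 1), 10)
```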

    qid & accept id: (15627299, 15627345) query: Using 'AND' in a many-to-many relationship soup:

    soup wrap:

    This problem is commonly known as Relational Division.

    SELECT  a.Name
    FROM    [user] a
            INNER JOIN UserInGroup b
                ON a.ID = b.UserID
            INNER JOIN [Group] c
                ON b.groupID = c.TypeId
    WHERE   c.Name IN ('Directors','London')
    GROUP   BY a.Name
    HAVING  COUNT(*) = 2
    

    But if a UNIQUE constraint is not enforced on GROUP for every USER, the DISTINCT keyword is needed to count each group only once:

    SELECT  a.Name
    FROM    [user] a
            INNER JOIN UserInGroup b
                ON a.ID = b.UserID
            INNER JOIN [Group] c
                ON b.groupID = c.TypeId
    WHERE   c.Name IN ('Directors','London')
    GROUP   BY a.Name
    HAVING  COUNT(DISTINCT c.Name) = 2
    

    OUTPUT from both queries

    ╔══════╗
    ║ NAME ║
    ╠══════╣
    ║ Bob  ║
    ╚══════╝
    
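    Note that the HAVING count must always equal the number of names in the IN list. To avoid hard-coding it, the count can be derived from the same list; this is only a sketch (assuming SQL Server 2008+ for the VALUES row constructor):

    ```sql
    SELECT  a.Name
    FROM    [user] a
            INNER JOIN UserInGroup b ON a.ID = b.UserID
            INNER JOIN [Group] c ON b.groupID = c.TypeId
    WHERE   c.Name IN ('Directors','London')
    GROUP   BY a.Name
    -- Count derived from the same list, so both stay in sync
    HAVING  COUNT(DISTINCT c.Name) = (SELECT COUNT(*)
                                      FROM (VALUES ('Directors'),('London')) AS g(Name))
    ```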
    qid & accept id: (15650876, 15684494) query: Searching Across Multiple Tables soup:


    Your attributes are attached to pages, so you can search for pages that have certain attributes by checking whether those attributes exist for a page. Finding the pages would look like this:

    Select Page.ID
    From Page
    where EXISTS
     (Select * 
      From Attributes
      Where Page_Id = Page.ID
        and (     (Name = 'Season' and Value = 'Autumn')
              or  (Name = 'Flavour' and Value = 'Savory')
              ... etc. ...
            )
     )
    

    If you want to find the Links, then you can join this to PAGE_LINK (and even to LINK, if you like).

    Select Page.ID
    From Page
     Join Page_Link PL on PL.Page_ID = Page.ID
     Join Link on Link.ID = PL.Link_ID
    where EXISTS
     (Select * 
      From Attributes
      Where Page_Id = Page.ID
        and (     (Name = 'Season' and Value = 'Autumn')
              or  (Name = 'Flavour' and Value = 'Savory')
              ... etc. ...
            )
     )
    
    qid & accept id: (15706765, 15706859) query: How can I make three columns my primary key soup:
    ALTER TABLE space ADD PRIMARY KEY(Postal, Number, Houseletter);
    

    If a primary key already exists then you want to do this:

    ALTER TABLE space DROP PRIMARY KEY, ADD PRIMARY KEY(Postal, Number, Houseletter);
    

    If you have duplicate PKs, you can try this:

    ALTER IGNORE TABLE space ADD UNIQUE INDEX idx_name (Postal, Number, Houseletter );
    

    This will drop all the duplicate rows. As an added benefit, future INSERTs that are duplicates will error out. As always, you may want to take a backup before running something like this.
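    Before running ALTER IGNORE, you can check whether (and which) duplicates exist with a quick aggregate; a sketch against the same space table:

    ```sql
    -- Lists every (Postal, Number, Houseletter) combination that occurs more than once
    SELECT Postal, Number, Houseletter, COUNT(*) AS copies
    FROM   space
    GROUP  BY Postal, Number, Houseletter
    HAVING COUNT(*) > 1;
    ```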

    For your second question, the query should look like this:

    SELECT postal, number, houseletter, furniturevalue, livingspace
    FROM space INNER JOIN furniture
    ON ( space.postal = furniture.postal
    AND     space.number = furniture.number
    AND     space.houseletter = furniture.houseletter)
    
    qid & accept id: (15720109, 15720178) query: beginner - obtain the top 3 in sql (taking same total score into account) soup:


    Although MySQL does not support window functions like most other RDBMSs do, you can still simulate what DENSE_RANK() does by using user-defined variables, e.g.

    SELECT  a.ID, a.TotalScore, b.Rank
    FROM    TableName a
            INNER JOIN
            (
                SELECT  TotalScore, @rn := @rn + 1 Rank
                FROM
                        (
                            SELECT  DISTINCT TotalScore
                            FROM    TableName
                        ) a, (SELECT @rn := 0) b
                ORDER   BY TotalScore DESC
            ) b ON  a.TotalScore = b.TotalScore
    WHERE   Rank <= 3
    

    OUTPUT

    ╔════╦════════════╦══════╗
    ║ ID ║ TOTALSCORE ║ RANK ║
    ╠════╬════════════╬══════╣
    ║  7 ║         20 ║    1 ║
    ║  4 ║         20 ║    1 ║
    ║  6 ║         18 ║    2 ║
    ║  9 ║         18 ║    2 ║
    ║  1 ║         16 ║    3 ║
    ╚════╩════════════╩══════╝
    
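    For reference, on MySQL 8.0+ (or any database with window functions) the simulation is no longer needed; a sketch with the real DENSE_RANK():

    ```sql
    SELECT ID, TotalScore, rnk
    FROM (
        SELECT ID, TotalScore,
               DENSE_RANK() OVER (ORDER BY TotalScore DESC) AS rnk
        FROM TableName
    ) t
    -- Ties share a rank, so more than 3 rows can come back, as in the output above
    WHERE rnk <= 3
    ```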
    qid & accept id: (15736503, 15737262) query: Oracle using REGEXP to validate a date field soup:


    Try PL/SQL instead of a regular expression. It will be significantly slower, but will be safer and easier to maintain and extend. You should rely on the Oracle format models to do this correctly. I've seen lots of attempts to validate this information using a regular expression, but I rarely see it done correctly.

    If you really care about performance, the real answer is to fix your data model.

    Code and Test Cases:

    --Function to convert a string to a date, or return null if the format is wrong.
    create or replace function validate_date(p_string in string) return date is
    begin
        return to_date(p_string, 'MONTH DD, YYYY');
    exception when others then
        begin
            return to_date(p_string, 'MM/DD/YYYY');
        exception when others then
            begin
                return to_date(p_string, 'DD-MON-RR');
            exception when others then
                return null;
            end;
        end;
    end;
    /
    
    --Test individual values
    select validate_date('JULY 31, 2009') from dual;
    2009-07-31
    select validate_date('7/31/2009') from dual;
    2009-07-31
    select validate_date('31-JUL-09') from dual;
    2009-07-31
    select validate_date('2009-07-31') from dual;
    
    

    Simple Performance Test:

    --Create table to hold test data
    create table test1(a_date varchar2(1000)) nologging;
    
    --Insert 10 million rows
    begin
        for i in 1 .. 100 loop
            insert /*+ append */ into test1
            select to_char(sysdate+level, 'MM/DD/YYYY') from dual connect by level <= 100000;
    
            commit;
        end loop;
    end;
    /
    
    --"Warm up" the database, run this a few times, see how long a count takes.
    --Best case time to count: 2.3 seconds
    select count(*) from test1;
    
    
    --How long does it take to convert all those strings?
    --6 minutes... ouch
    select count(*)
    from test1
    where validate_date(a_date) is not null;
    
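    As an aside, if you are on Oracle 12.2 or later, the built-in VALIDATE_CONVERSION function does this without a custom function. A sketch (it checks one format model per call, so chain calls for multiple formats):

    ```sql
    -- Returns 1 when the string converts cleanly under the given format, 0 otherwise
    SELECT COUNT(*)
    FROM   test1
    WHERE  VALIDATE_CONVERSION(a_date AS DATE, 'MM/DD/YYYY') = 1;
    ```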
    qid & accept id: (15742348, 15742443) query: devide operation in sql soup:


    The following should be your query:

    Select * from employee where projectname = (select projectname from employee where LastName = 'Jones');
    

    We have not used the IN clause, as Jones is working on only one project.

    If he is working on multiple projects, then the query can be:

    Select * from employee where projectname in (select projectname from employee where LastName = 'Jones');
    

    Thanks

    qid & accept id: (15743183, 15743244) query: How to fetch Distinct Title from the GROUP_CONCAT as Left Join without repeating other tables' data? soup:


    You need to use a GROUP BY clause because GROUP_CONCAT() is an aggregate function.

    SELECT  Title, GROUP_CONCAT(FEAT) FeatList
    FROM    Prop_Feat
    GROUP   BY Title
    

    OUTPUT

    ╔════════════╦═══════════════════╗
    ║   TITLE    ║     FEATLIST      ║
    ╠════════════╬═══════════════════╣
    ║ Appliances ║ Gas Range,Fridge  ║
    ║ Interior   ║ Hardwood Flooring ║
    ╚════════════╩═══════════════════╝
    
    qid & accept id: (15758509, 15758945) query: Count references to own ID in MySQL with Grouping soup:


    Assuming that for a response the parent_id is the post_id of the post it responds to, you can achieve this in the following way.

    Query 1:

    SELECT
       a.user,
       SUM(IF(a.parent_id = 0, 1, 0)) as 'NewPosts',
       SUM(IF(a.parent_id > 0, 1,0))  as 'Responses',
       COUNT(a.parent_id)             as 'TotalPosts',
       SUM(IF(a.user = b.user, 1, 0)) as 'SelfResponses'
    FROM 
      Table1 a
    LEFT JOIN
      Table1 b
    ON 
      a.parent_id = b.id
    GROUP BY 
      a.user
    

    Results:

    |   USER | NEWPOSTS | RESPONSES | TOTALPOSTS | SELFRESPONSES |
    --------------------------------------------------------------
    |  Henry |        1 |         2 |          3 |             1 |
    | Joseph |        1 |         0 |          1 |             0 |
    

    SQL FIDDLE

    Hope this helps

    qid & accept id: (15808243, 15809796) query: How to Select master table data and select referance table top one data sql query soup:


    In SQL Server 2005+, use the option with the OUTER APPLY operator:

    SELECT *
    FROM master t1 OUTER APPLY (
                                SELECT TOP 1 t2.Col1, t2.Col2 ...
                                FROM child t2
                                WHERE t1.Id = t2.Id
                                ORDER BY t2.CreatedDate DESC
                                ) o
    

    Or the option with a CTE and the ROW_NUMBER() ranking function:

    ;WITH cte AS
     (                            
      SELECT *, 
             ROW_NUMBER() OVER(PARTITION BY t1.Id ORDER BY t2.CreatedDate DESC) AS rn
      FROM master t1 JOIN child t2 ON t1.Id = t2.Id
      )
      SELECT *
      FROM cte
      WHERE rn = 1
    
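    A closely related point: OUTER APPLY keeps master rows that have no child rows (like a LEFT JOIN). If you only want masters with at least one child, CROSS APPLY is the INNER JOIN analogue; a sketch with the same hypothetical tables:

    ```sql
    SELECT *
    FROM master t1 CROSS APPLY (
                                -- Latest child row; masters with no child drop out
                                SELECT TOP 1 t2.*
                                FROM child t2
                                WHERE t1.Id = t2.Id
                                ORDER BY t2.CreatedDate DESC
                                ) o
    ```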
    qid & accept id: (15834569, 15834758) query: How to bulk insert only new rows in PostreSQL soup:


    Import data

    COPY everything to a temporary staging table and insert only new titles into your target table.

    CREATE TEMP TABLE tmp(title text);
    
    COPY tmp FROM 'path/to/file.csv';
    ANALYZE tmp;
    
    INSERT INTO tbl
    SELECT DISTINCT tmp.title
    FROM   tmp 
    LEFT   JOIN tbl USING (title)
    WHERE  tbl.title IS NULL;
    

    IDs should be generated automatically with a serial column tbl_id in tbl.

    The LEFT JOIN / IS NULL construct disqualifies already existing titles. NOT EXISTS would be another possibility.
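    The NOT EXISTS variant of the same insert would be a sketch like this, using the same tmp and tbl:

    ```sql
    INSERT INTO tbl (title)
    SELECT DISTINCT tmp.title
    FROM   tmp
    WHERE  NOT EXISTS (
       -- Skip titles already present in the target table
       SELECT 1
       FROM   tbl
       WHERE  tbl.title = tmp.title
    );
    ```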

    DISTINCT prevents duplicates in the incoming data in the temporary table tmp.

    ANALYZE is useful to make sure the query planner picks a sensible plan, and temporary tables are not analyzed by autovacuum.

    Since you have 3 million items, it might pay to raise the setting for temp_buffers (for this session only):

    SET temp_buffers = 1000MB;
    

    Or however much you can afford and is enough to hold the temp table in RAM, which is much faster. Note: must be done first in the session - before any temp objects are created.

    Retrieve IDs

    To see all IDs for the imported data:

    SELECT tbl.tbl_id, tbl.title
    FROM   tbl
    JOIN   tmp USING (title)
    

    In the same session! A temporary table is dropped automatically at the end of the session.

    qid & accept id: (15836482, 15837317) query: Query to replace null values from the table soup:


    This query should work even if there are several consecutive records with NULL. The derived table (SELECT * FROM Table1) t1 is needed because MySQL does not allow the table being updated to be referenced directly in a subquery.

    Query:

    SQL Fiddle Example

    UPDATE Table1
    SET car_name = (SELECT t1.car_name
                    FROM (SELECT * FROM Table1) t1
                    WHERE t1.id < Table1.id
                    AND t1.car_name is not null
                    ORDER BY t1.id DESC
                    LIMIT 1)
    WHERE car_name is null
    

    Result:

    | ID | CAR_NAME | MODEL | YEAR |
    --------------------------------
    |  1 |        a |   abc | 2000 |
    |  2 |        b |   xyx | 2001 |
    |  3 |        b |   asd | 2003 |
    |  4 |        c |   qwe | 2004 |
    |  5 |        c |   xds | 2005 |
    |  6 |        d |   asd | 2006 |
    
    qid & accept id: (15872394, 15872582) query: Using multiple joins (e.g left join) soup:


    Let table B be:

    id
    ----
    1
    2
    3
    

    Let table C be:

    id     name
    ------------
    1      John
    2      Mary
    2      Anne
    3      Stef
    

    Each id from b is matched with the ids from c, so id=2 will be matched twice. A left join on id will therefore return 4 rows even though base table B has 3 rows.

    Now look at a more evil example:

    Table B

    id
    ----
    1
    2
    2
    3
    4
    

    Table C

    id     name
    ------------
    1      John
    2      Mary
    2      Anne
    3      Stef
    

    Every id from b is matched with the ids from c; the first id=2 will be matched twice and the second id=2 will be matched twice, so the result of

    select b.id, c.name
    from b left join c on (b.id = c.id)
    

    will be

    id     name
    ------------
    1      John
    2      Mary
    2      Mary
    2      Anne
    2      Anne
    3      Stef
    4      (null)
    

    The id=4 is not matched but still appears in the result because it is a left join.

    qid & accept id: (15948208, 15948748) query: Group dates by their day of week soup:


    I think getting exactly what you want in one query is not easily possible, but I came up with something that is close to your desired result:

    SELECT TIME(air), title, GROUP_CONCAT(DAYOFWEEK(air)) 
    FROM programs WHERE title = 'Factor' 
    GROUP BY TIME(air)
    

    This gives me the following result:

    TIME(air)   title   GROUP_CONCAT(DAYOFWEEK(air))
    -------------------------------------------------
    14:00:00    Factor  3
    17:00:00    Factor  2,3,4
    

    With this result you can easily use PHP to get your desired output. Results like "monday, wednesday, friday-saturday" are possible with this too.
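    If you would rather have MySQL emit day names instead of numbers, GROUP_CONCAT also accepts ORDER BY and SEPARATOR; a sketch against the same programs table:

    ```sql
    SELECT TIME(air), title,
           -- Day names, sorted in weekday order rather than alphabetically
           GROUP_CONCAT(DAYNAME(air) ORDER BY DAYOFWEEK(air) SEPARATOR ', ') AS days
    FROM programs
    WHERE title = 'Factor'
    GROUP BY TIME(air)
    ```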

    qid & accept id: (15964439, 15965140) query: Efficient way to insert multiple rows and assigning each one's Id to another table's column soup:


    You can use the OUTPUT clause to capture identities from multiple inserted rows. In the following, I'm assuming that ServiceName and RequestName are sufficient to uniquely identify values being passed in. If they're not, then hopefully you can adapt the below (you didn't really define in the question any usable non-identity column names or values):

    First, set up the tables:

    create table Requests (RId int IDENTITY(1,1) not null primary key,RequestName varchar(10) not null)
    create table Services (SId int IDENTITY(1,1) not null primary key,ServiceName varchar(10) not null)
    create table Mappings (MId int IDENTITY(1,1) not null,RId int not null references Requests,SId int not null references Services)
    

    Now declare what would be the TVP passed into the stored procedure (note that this script and the next need to be run together in this simulation):

    declare @NewValues table (
        RequestName varchar(10) not null,
        ServiceName varchar(10) not null
    )
    insert into @NewValues (RequestName,ServiceName) values
    ('R1','S1'),
    ('R1','S2'),
    ('R1','S3'),
    ('R2','S4'),
    ('R2','S5'),
    ('R3','S6')
    

    And then, inside the SP, you'd have code like the following:

    declare @TmpRIDs table (RequestName varchar(10) not null,RId int not null)
    declare @TmpSIDs table (ServiceName varchar(10) not null,SId int not null)
    
    ;merge into Requests r using (select distinct RequestName from @NewValues) n on 1=0
    when not matched then insert (RequestName) values (n.RequestName)
    output n.RequestName,inserted.RId into @TmpRIDs;
    
    ;merge into Services s using (select distinct ServiceName from @NewValues) n on 1=0
    when not matched then insert (ServiceName) values (n.ServiceName)
    output n.ServiceName,inserted.SId into @TmpSIDs;
    
    insert into Mappings (RId,SId)
    select RId,SId
    from @NewValues nv
        inner join
        @TmpRIDs r
            on
                nv.RequestName = r.RequestName 
        inner join
        @TmpSIDs s
            on
                nv.ServiceName = s.ServiceName;
    

    And to check the result:

    select * from Mappings
    

    produces:

    MId         RId         SId
    ----------- ----------- -----------
    1           1           1
    2           1           2
    3           1           3
    4           2           4
    5           2           5
    6           3           6
    

    Which is similar to what you have in your question.

    The tricky part of the code is (mis-)using the MERGE statement, in order to be able to capture columns from both the inserted table (which contains the newly generated IDENTITY values) and the table that's acting as the source of rows. The OUTPUT clause for the INSERT statement only allows reference to the inserted pseudo-table, so it can't be used here.

    qid & accept id: (16036991, 16037053) query: Reference something in the select clause SQL soup:


    No, you cannot use an alias that was generated at the same level of the SELECT statement.

    Here are the possible ways to accomplish it.

    Using the original formula:

    select sum([some calculation]) as x,
           sum([some other calculation]) as y,
           sum([some calculation]) / sum([some other calculation]) as z
    from    tableName
    

    or by using a subquery:

    SELECT  x,
            y,
            x/y z
    FROM 
    (
       select sum([some calculation]) as x,
              sum([some other calculation]) as y
       from   tableName
    ) s
    
    qid & accept id: (16053215, 16053891) query: find second (or nth) latest value in oracle soup:


    If I understand you right, then try something like this:

    select * 
    from(
      select sent_by, row_number() over (order by sent_by desc, id asc) row_num
      from MY_TEST) t
    where row_num = 2 -- or 3 ... n
    

    UPDATE

    Try this:

    select * 
    from(
      select sent_by, 
             rank() over (order by max(id) desc)  rk
       from MY_TEST
      group by sent_by) t
    where rk = 2 -- or 3 .. n
    

    Here is a sqlfiddle demo
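    As a quick sanity check, the row-numbering idea can be sketched with Python's sqlite3 (window functions need SQLite 3.25+, which recent Python builds bundle; data invented, numbering by id descending so row 2 is the second latest):

```python
import sqlite3

# Invented data: four messages, id 4 being the latest.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MY_TEST (id INTEGER, sent_by TEXT)")
conn.executemany("INSERT INTO MY_TEST VALUES (?, ?)",
                 [(1, "ann"), (2, "bob"), (3, "ann"), (4, "cat")])

# Number rows from latest to earliest, then pick row 2.
second_latest = conn.execute("""
    SELECT sent_by FROM (
        SELECT sent_by, ROW_NUMBER() OVER (ORDER BY id DESC) AS row_num
        FROM MY_TEST
    ) t
    WHERE row_num = 2   -- or 3 ... n
""").fetchone()[0]
print(second_latest)  # ann
```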

    qid & accept id: (16053425, 16053663) query: Select column names that match a criteria (MySQL) soup:

    soup wrap:

    If I understand your question correctly, maybe you need something like this:

    SELECT 'col_a' col
    FROM yourtable
    WHERE col_a
    UNION
    SELECT 'col_b'
    FROM yourtable
    WHERE col_b
    UNION
    SELECT 'col_c'
    FROM yourtable
    WHERE col_c
    ...
    

    This will return all columns in your table that have at least one row where they are true.

    Or maybe this:

    SELECT
      id,
      CONCAT_WS(', ',
        CASE WHEN col_a THEN 'col_a' END,
        CASE WHEN col_b THEN 'col_b' END,
        CASE WHEN col_c THEN 'col_c' END) cols
    FROM
      yourtable
    

    that will return rows in this format:

    | ID | COLS                |
    ----------------------------
    |  1 | col_a, col_c        |
    |  2 | col_a, col_b, col_c |
    |  3 |                     |
    |  4 | col_c               |
    ...
    

    Please see fiddle here. And if you need to do it dynamically, you could use this prepared statement:

    SELECT
      CONCAT(
        'SELECT id, CONCAT_WS(\', \',',
      GROUP_CONCAT(
        CONCAT('CASE WHEN ',
               `COLUMN_NAME`,
               ' THEN \'',
               `COLUMN_NAME`,
               '\' END')),
        ') cols FROM yourtable'
      )
    FROM
      `INFORMATION_SCHEMA`.`COLUMNS` 
    WHERE
      `TABLE_NAME`='yourtable'
      AND COLUMN_NAME!='id'
    INTO @sql;
    
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    

    Fiddle here.

    qid & accept id: (16093468, 16093586) query: Over lapping in SQL soup:

    soup wrap:

    My solution starts by generating all possible pairs of applications that are of interest. This is the driver subquery.

    It then joins in the original data for each of the apps.

    Finally, it uses count(distinct) to count the distinct users that match between the two lists.

    select pairs.app1, pairs.app2,
           COUNT(distinct case when tleft.user = tright.user then tleft.user end) as NumCommonUsers
    from (select t1.app as app1, t2.app as app2
          from (select distinct app
                from t
               ) t1 cross join
               (select distinct app
                from t
               ) t2
          where t1.app <= t2.app
         ) pairs left outer join
         t tleft
         on tleft.app = pairs.app1 left outer join
         t tright
         on tright.app = pairs.app2
    group by pairs.app1, pairs.app2
    

    You could move the conditional comparison in the count to the joins and just use count(distinct):

    select pairs.app1, pairs.app2,
           COUNT(distinct tleft.user) as NumCommonUsers
    from (select t1.app as app1, t2.app as app2
          from (select distinct app
                from t
               ) t1 cross join
               (select distinct app
                from t
               ) t2
          where t1.app <= t2.app
         ) pairs left outer join
         t tleft
         on tleft.app = pairs.app1 left outer join
         t tright
         on tright.app = pairs.app2 and
            tright.user = tleft.user
    group by pairs.app1, pairs.app2
    

    I prefer the first method because it is more explicit on what is being counted.

    This is standard SQL, so it should work on Vertica.
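    A runnable sketch of the first query against invented data, using Python's sqlite3 (the column "user" is renamed usr here to sidestep any reserved-word trouble):

```python
import sqlite3

# Invented data: apps a and b share one user, u2.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (app TEXT, usr TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("a", "u1"), ("a", "u2"), ("b", "u2"), ("b", "u3")])

rows = conn.execute("""
    SELECT pairs.app1, pairs.app2,
           COUNT(DISTINCT CASE WHEN tleft.usr = tright.usr
                               THEN tleft.usr END) AS NumCommonUsers
    FROM (SELECT t1.app AS app1, t2.app AS app2
          FROM (SELECT DISTINCT app FROM t) t1 CROSS JOIN
               (SELECT DISTINCT app FROM t) t2
          WHERE t1.app <= t2.app) pairs
    LEFT OUTER JOIN t tleft  ON tleft.app  = pairs.app1
    LEFT OUTER JOIN t tright ON tright.app = pairs.app2
    GROUP BY pairs.app1, pairs.app2
""").fetchall()
print(sorted(rows))  # [('a', 'a', 2), ('a', 'b', 1), ('b', 'b', 2)]
```

The CASE with no ELSE yields NULL for non-matching users, and COUNT(DISTINCT ...) ignores NULLs, which is what makes the conditional count work.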

    qid & accept id: (16127878, 16128127) query: Inserting data from one table(triplestore) to another(property table) soup:

    soup wrap:

    This is easy if you have a known, fixed set of properties. If you do not have a known set of fixed properties you have to generate dynamic SQL, either from your app, from PL/PgSQL or using the crosstab function from the tablefunc extension.

    For fixed property sets you can self-join:

    http://sqlfiddle.com/#!12/391b7/6

    SELECT p1."Subject", p1."Object" AS "prop1", p2."Object" AS "prop2"
    FROM triplestore p1
    INNER JOIN triplestore p2 ON (p1."Subject" = p2."Subject")
    WHERE p1."Property" = 'prop1'
      AND p2."Property" = 'prop2'
    ORDER BY p1."Subject";
    
    SELECT p1."Subject", p1."Object" AS "prop1"
    FROM triplestore p1
    WHERE p1."Property" = 'prop3'
    ORDER BY p1."Subject";
    

    To turn these into INSERTs simply use INSERT ... SELECT eg:

    INSERT INTO "Property Table 1"
    SELECT p1."Subject", p1."Object" AS "prop1"
    FROM triplestore p1
    WHERE p1."Property" = 'prop3'
    ORDER BY p1."Subject";
    
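    The self-join for a fixed property set can be sketched with Python's sqlite3 and invented triples:

```python
import sqlite3

# Invented triples with a fixed property set (prop1, prop2).
conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE triplestore '
             '("Subject" TEXT, "Property" TEXT, "Object" TEXT)')
conn.executemany("INSERT INTO triplestore VALUES (?, ?, ?)", [
    ("s1", "prop1", "red"), ("s1", "prop2", "big"),
    ("s2", "prop1", "blue"), ("s2", "prop2", "small")])

# One self-join per extra property pivots property rows into columns.
rows = conn.execute("""
    SELECT p1."Subject", p1."Object" AS prop1, p2."Object" AS prop2
    FROM triplestore p1
    INNER JOIN triplestore p2 ON p1."Subject" = p2."Subject"
    WHERE p1."Property" = 'prop1' AND p2."Property" = 'prop2'
    ORDER BY p1."Subject"
""").fetchall()
print(rows)  # [('s1', 'red', 'big'), ('s2', 'blue', 'small')]
```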
    qid & accept id: (16136119, 16136452) query: MySQL - Combining multiple selects from same table into one result table with a group by soup:

    soup wrap:

    MySQL does not have a PIVOT function but you can convert the rows of data into columns using an aggregate function with a CASE expression.

    If you have a limited number of years, then you can hard-code the query:

    select meterNo,
      sum(case when year(readingDate) = 2009 then readingValue else 0 end) `2009`,
      sum(case when year(readingDate) = 2010 then readingValue else 0 end) `2010`,
      sum(case when year(readingDate) = 2011 then readingValue else 0 end) `2011`,
      sum(case when year(readingDate) = 2012 then readingValue else 0 end) `2012`,
      sum(case when year(readingDate) = 2013 then readingValue else 0 end) `2013`
    from readings
    group by meterno;
    

    See SQL Fiddle with Demo

    But if you are going to have an unknown number of values or want the query to adjust as new years are added to the database, then you can use a prepared statement to generate dynamic SQL:

    SET @sql = NULL;
    SELECT
      GROUP_CONCAT(DISTINCT
        CONCAT(
          'sum(CASE WHEN year(readingDate) = ',
          year(readingDate),
          ' THEN readingValue else 0 END) AS `',
          year(readingDate), '`'
        )
      ) INTO @sql
    FROM readings;
    
    SET @sql 
      = CONCAT('SELECT meterno, ', @sql, ' 
                from readings
                group by meterno');
    
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
    

    See SQL Fiddle with Demo. Both give the result:

    | METERNO | 2009 | 2010 | 2012 | 2013 | 2011 |
    ----------------------------------------------
    |       1 |   90 |  180 |    0 |   90 |   90 |
    |       2 |   50 |    0 |   90 |    0 |    0 |
    |       3 |   80 |   40 |   90 |   90 |    0 |
    

    As a side note, if you want null to display in the rows without values instead of the zeros, then you can remove the else 0 (see Demo)
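    The hard-coded pivot translates almost directly to other engines. A sketch with Python's sqlite3 and invented readings, swapping MySQL's year() for SQLite's strftime('%Y', ...):

```python
import sqlite3

# Invented readings; SQLite has no YEAR(), so strftime('%Y', ...) is
# used where the MySQL query calls year(readingDate).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings "
             "(meterNo INTEGER, readingDate TEXT, readingValue INTEGER)")
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)",
                 [(1, "2009-01-01", 90), (1, "2010-06-01", 180),
                  (2, "2009-03-01", 50)])

rows = conn.execute("""
    SELECT meterNo,
           SUM(CASE WHEN strftime('%Y', readingDate) = '2009'
                    THEN readingValue ELSE 0 END) AS "2009",
           SUM(CASE WHEN strftime('%Y', readingDate) = '2010'
                    THEN readingValue ELSE 0 END) AS "2010"
    FROM readings
    GROUP BY meterNo
""").fetchall()
print(sorted(rows))  # [(1, 90, 180), (2, 50, 0)]
```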

    qid & accept id: (16143769, 16147021) query: Referencing the value of the previous calculcated value in Oracle soup:

    soup wrap:

    A variation on Ben's answer to use a windowing clause, which seems to take care of your updated requirements:

    select eventno, eventtype, totalcharge, remainingqty, outqty,
        initial_charge - case when running_outqty = 0 then 0
        else (running_outqty / 100) * initial_charge end as remainingcharge
    from (
        select eventno, eventtype, totalcharge, remainingqty, outqty,
            first_value(totalcharge) over (partition by null
                order by eventno desc) as initial_charge,
            sum(outqty) over (partition by null
                order by eventno desc
                rows between unbounded preceding and current row)
                as running_outqty
        from t42
    );
    

    Except it gives 19.2 instead of 12.8 for the third row, but that's what your formula suggests it should be:

       EVENTNO EVENT TOTALCHARGE REMAININGQTY     OUTQTY REMAININGCHARGE
    ---------- ----- ----------- ------------ ---------- ---------------
             4 ACQ            32          100          0              32
             3 OTHER                      100          0              32
             2 OUT                         60         40            19.2
             1 OUT                          0         60               0
    

    If I add another split so it goes from 60 to zero in two steps, with another non-OUT record in the mix too:

       EVENTNO EVENT TOTALCHARGE REMAININGQTY     OUTQTY REMAININGCHARGE
    ---------- ----- ----------- ------------ ---------- ---------------
             6 ACQ            32          100          0              32
             5 OTHER                      100          0              32
             4 OUT                         60         40            19.2
             3 OUT                         30         30             9.6
             2 OTHER                       30          0             9.6
             1 OUT                          0         30               0
    

    There's an assumption that the remaining quantity is consistent and you can effectively track a running total of what has gone before, but from the data you've shown that looks plausible. The inner query calculates that running total for each row, and the outer query does the calculation; that could be condensed but is hopefully clearer like this...

    qid & accept id: (16184493, 16184689) query: SQL Server Insert table into same table? soup:

    soup wrap:

    Primary keys must contain unique values and cannot contain NULL values, so use the following queries if your table doesn't have a primary key.

    for all columns use:

    INSERT INTO dbo.Calls SELECT * FROM dbo.Calls
    

    for selected columns use:

     INSERT INTO dbo.Calls (/* column list */) SELECT /* same column list */ FROM dbo.Calls
    
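    A minimal sketch of the all-columns form, using Python's sqlite3 and an invented table with no primary key; every row gets duplicated:

```python
import sqlite3

# Invented table with no primary key, so inserting the table into
# itself cannot hit a uniqueness violation.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Calls (caller TEXT, duration INTEGER)")
conn.executemany("INSERT INTO Calls VALUES (?, ?)",
                 [("ann", 5), ("bob", 7)])

conn.execute("INSERT INTO Calls SELECT * FROM Calls")  # duplicate every row
count = conn.execute("SELECT COUNT(*) FROM Calls").fetchone()[0]
print(count)  # 4
```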
    qid & accept id: (16186786, 16186806) query: MySQL compare same values in two column soup:
    soup wrap:
    SELECT  jamu_a,
            jamu_b,
            GROUP_CONCAT(khasiat) khasiat,
            COUNT(*) total
    FROM    TableName
    GROUP   BY  jamu_a, jamu_b
    

    OUTPUT

    ╔════════╦════════╦═════════╦═══════╗
    ║ JAMU_A ║ JAMU_B ║ KHASIAT ║ TOTAL ║
    ╠════════╬════════╬═════════╬═══════╣
    ║ A      ║ B      ║ Z,X,C   ║     3 ║
    ╚════════╩════════╩═════════╩═══════╝
    

    If there are repeating values in column KHASIAT and you want it to be unique, you can add DISTINCT inside GROUP_CONCAT():

    SELECT  jamu_a,
            jamu_b,
            GROUP_CONCAT(DISTINCT khasiat) khasiat,
            COUNT(*) total
    FROM    TableName
    GROUP   BY  jamu_a, jamu_b
    
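    The DISTINCT variant can be sketched with Python's sqlite3 (invented rows in which khasiat 'X' repeats for the one (A, B) pair):

```python
import sqlite3

# Invented rows: khasiat 'X' appears twice for the pair (A, B).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TableName (jamu_a TEXT, jamu_b TEXT, khasiat TEXT)")
conn.executemany("INSERT INTO TableName VALUES (?, ?, ?)",
                 [("A", "B", "Z"), ("A", "B", "X"), ("A", "B", "X")])

jamu_a, jamu_b, khasiat, total = conn.execute("""
    SELECT jamu_a, jamu_b,
           GROUP_CONCAT(DISTINCT khasiat) AS khasiat,
           COUNT(*) AS total
    FROM TableName
    GROUP BY jamu_a, jamu_b
""").fetchone()
print(khasiat, total)  # two distinct khasiat values, 3 rows counted
```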
    qid & accept id: (16212126, 16212506) query: Updating a Dataset to add caclulated fields soup:

    soup wrap:

    You could use something like this to get the results for each jockey in one row:

    SELECT  jockey.jockey_skey,
            TotalRaces = COUNT(*),
            [1sts] = COUNT(CASE WHEN raceresults.place = '01' THEN 1 END),
            [2nds] = COUNT(CASE WHEN raceresults.place = '02' THEN 1 END),
            [3rds] = COUNT(CASE WHEN raceresults.place = '03' THEN 1 END),
            [4ths] = COUNT(CASE WHEN raceresults.place = '04' THEN 1 END),
            [5ths] = COUNT(CASE WHEN raceresults.place = '05' THEN 1 END),
            [6ths] = COUNT(CASE WHEN raceresults.place = '06' THEN 1 END),
            [7ths] = COUNT(CASE WHEN raceresults.place = '07' THEN 1 END),
            [8ths] = COUNT(CASE WHEN raceresults.place = '08' THEN 1 END),
            -- etc
            [NonRunner] = COUNT(CASE WHEN raceresults.place = 'NR' THEN 1 END),
            [Fell] = COUNT(CASE WHEN raceresults.place = 'F' THEN 1 END),
            [PulledUp] = COUNT(CASE WHEN raceresults.place = 'PU' THEN 1 END),
            [Unseated] = COUNT(CASE WHEN raceresults.place = 'U' THEN 1 END),
            [Refused] = COUNT(CASE WHEN raceresults.place = 'R' THEN 1 END),
            [BroughtDown] = COUNT(CASE WHEN raceresults.place = 'B' THEN 1 END)
    FROM    jockey 
            INNER JOIN runnersandriders 
                ON jockey.jockey_skey = runnersandriders.jockey_skey 
            INNER JOIN horse 
                ON runnersandriders.horse_skey = horse.horse_skey 
            INNER JOIN raceresults 
                ON horse.horse_skey = raceresults.horse_skey 
    GROUP  BY jockey.jockey_skey
    ORDER  BY jockey.jockey_skey 
    

    Simplified Example on SQL Fiddle

    Alternatively you could use WITH ROLLUP to get an additional row with totals:

    SELECT  jockey.jockey_skey,
            raceresults.place,
            [CountOfResult] = COUNT(*)
    FROM    jockey 
            INNER JOIN runnersandriders 
                ON jockey.jockey_skey = runnersandriders.jockey_skey 
            INNER JOIN horse 
                ON runnersandriders.horse_skey = horse.horse_skey 
            INNER JOIN raceresults 
                ON horse.horse_skey = raceresults.horse_skey 
    GROUP  BY jockey.jockey_skey, raceresults.place
    WITH ROLLUP
    ORDER  BY jockey.jockey_skey, raceresults.place;
    

    In the rolled-up result, NULL values represent totals.

    Simplified Example on SQL Fiddle

    qid & accept id: (16216129, 16216258) query: Using a current row value into a subquery soup:

    soup wrap:

    Based on your description, this may be the query that you want:

    select person, AVG(OrderTotal), COUNT(distinct orderId)
    from (select Customer_id as person, Order_id, SUM(total) as OrderTotal
          from Orders
          group by Customer_Id, Order_Id
         ) o
    group by person 
    

    I say "may" because I would expect OrderId to be a unique key in the Orders table. So, the inner subquery wouldn't be doing anything. Perhaps you mean something like OrderLines in the inner query.

    The reason your query fails is because of the correlation statement:

    where Customer_Id = person
    

    You intend for this to use the value from the outer query ("person") to relate to the inner one ("Customer_Id"). However, the inner query does not know the alias in the select clause of the outer one. So, "Person" is undefined. When doing correlated subqueries, you should always use table aliases. That query should look more like:

    (select COUNT(o2.Order_Id) as timesSeen  
     from Orders o2 where  o2.Customer_Id=o.person 
     group by o2.Order_Id
    )
    

    Assuming "o" is the alias for orders in the outer query. Correlated subqueries are not needed. You should just simplify the query.

    qid & accept id: (16223233, 16226292) query: Codeigniter - loop through post information passing value to model query and outputting result soup:

    soup wrap:

    You should use CodeIgniter's input class to get all post values.

    $formValues = $this->input->post(NULL, TRUE);
    

    Then in your controller set an intermediate value to hold your data.

    $products = array();
    
    foreach($formValues as $key => $value) 
    {
        $products[] = $this->sales_model->get_productdetails($key)
    }
    
    $data = array();
    $data["products"] = $products;
    

    Pass intermediary to the view.

    $this->load->view('sales/new_autospread_order_lines', $data);
    

    In your view, reference each keyed item in the $data array as a variable.

    
    

    qid & accept id: (16291075, 16291086) query: oracle duplicate rows based on a single column soup:
    soup wrap:
    SELECT  a.*
    FROM    TableName a
            INNER JOIN
            (
                SELECT  EmpID
                FROM    TableName
                GROUP   BY EmpID
                HAVING  COUNT(*) > 1
            ) b ON a.EmpID = b.EmpID
    

    Another way, although I prefer the one above, is to use IN:

    SELECT  a.*
    FROM    TableName a
    WHERE   EmpId IN
            (
                SELECT  EmpId
                FROM    TableName
                GROUP   BY EmpId
                HAVING  COUNT(*) > 1
            ) 
    
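    The join-on-duplicate-keys pattern can be sketched with Python's sqlite3 (invented rows; EmpID 1 appears twice, EmpID 2 once):

```python
import sqlite3

# Invented rows: EmpID 1 is duplicated, EmpID 2 is not.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TableName (EmpID INTEGER, name TEXT)")
conn.executemany("INSERT INTO TableName VALUES (?, ?)",
                 [(1, "ann"), (1, "ann2"), (2, "bob")])

# The derived table keeps only EmpIDs occurring more than once;
# joining back returns the full duplicated rows.
rows = conn.execute("""
    SELECT a.*
    FROM TableName a
    INNER JOIN (SELECT EmpID
                FROM TableName
                GROUP BY EmpID
                HAVING COUNT(*) > 1) b ON a.EmpID = b.EmpID
""").fetchall()
print(sorted(rows))  # [(1, 'ann'), (1, 'ann2')]
```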
    qid & accept id: (16330159, 16330179) query: Interview : update table values using select statement soup:

    soup wrap:

    Try this:

    Update TableName Set Gender=Case when Gender='M' Then 'F' Else 'M' end
    

    At the OP's request, an update using SELECT:

    Update TableName T Set Gender=(
    Select Gender from TableName B where  T.Gender!=B.Gender and rownum=1);
    

    SQL FIDDLE DEMO
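    The CASE-based swap can be verified with Python's sqlite3 (invented table; every gender flips in one statement):

```python
import sqlite3

# Invented table: two rows, one of each gender.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TableName (name TEXT, Gender TEXT)")
conn.executemany("INSERT INTO TableName VALUES (?, ?)",
                 [("ann", "F"), ("bob", "M")])

# One pass: 'M' becomes 'F', everything else becomes 'M'.
conn.execute("UPDATE TableName SET Gender = "
             "CASE WHEN Gender = 'M' THEN 'F' ELSE 'M' END")
rows = conn.execute(
    "SELECT name, Gender FROM TableName ORDER BY name").fetchall()
print(rows)  # [('ann', 'M'), ('bob', 'F')]
```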

    qid & accept id: (16335925, 16336267) query: difference in days, between two recordings soup:

    soup wrap:

    Please try:

    ;with T as(
        select *, ROW_NUMBER() over (order by [User], [Days]) Rnum from YourTable
    )
    select
        distinct a.[User],
        b.[Days]-a.[Days] difference_in_day
    from T a left join T b on a.Rnum=b.Rnum-1
    where b.[User] is not null
    

    Sample

    declare @tbl as table(xUser nvarchar(1), xDays int)
    insert into @tbl values 
    ('A', 1),
    ('A', 1),
    ('A', 2),
    ('B', 2),
    ('B', 5)
    
    select *, ROW_NUMBER() over (order by xUser, xDays) Rnum from @tbl
    
    ;with T as(
        select *, ROW_NUMBER() over (order by xUser, xDays) Rnum from @tbl
    )
    select 
        distinct a.xUser, 
        b.xDays-a.xDays difference_in_day 
    from T a left join T b on a.Rnum=b.Rnum-1 
    where b.xUser is not null
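
    A side note, not in the original answer: on SQL Server 2012 and later, the self-join on Rnum can be replaced by the LAG window function, which reads the previous row's value directly. A minimal sketch against the same @tbl sample:

    ;with T as(
        select *, LAG(xDays) over (order by xUser, xDays) PrevDays from @tbl
    )
    select xUser, xDays - PrevDays difference_in_day
    from T
    where PrevDays is not null

    Like the join version, this pairs each row with its global predecessor; add PARTITION BY xUser inside OVER() if the differences should stay within a single user.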
    
    qid & accept id: (16372169, 16374519) query: return value of stored procedure based on different rules soup:

    soup wrap:

    The key is to create a column for each of your criteria, e.g. one column for whether the next-door flat's owner has the same nationality, another for whether the floor is empty.

    You can then take all your criteria and place them within the order by of a ROW_NUMBER() function to get the flats in the order you defined. The key part in the below query is this:

    RowNumber = ROW_NUMBER() OVER(ORDER BY PrevIsNationalityMatch DESC, 
                                            NextIsNationalityMatch DESC, 
                                            EmptyFloor DESC, 
                                            EmptyFlatsEitherSide DESC,
                                            Floor, 
                                            FlatNo)
    

    The four columns (PrevIsNationalityMatch, NextIsNationalityMatch, EmptyFloor, EmptyFlatsEitherSide) are all bit fields, so if a row exists where the previous flat is owned by someone of the same nationality, it will always be ranked first by the ROW_NUMBER function. Otherwise it looks for whether the next flat is owned by someone of the same nationality (I added this rule as it seemed logical, but it can easily be removed from the order by), and so on, until it is left sorting just by floor and flat number.

    DECLARE @NewOwnerNationality VARCHAR(20) = 'BRAZIL';
    WITH FlatOwnerNationality AS
    (   SELECT  FlatMaster.Floor, 
                FlatMaster.FlatNo, 
                FlatMaster.IsOccupied,
                IsNationalityMatch = CASE WHEN OwnerMaster.OwnerNationality = @NewOwnerNationality THEN 1 ELSE 0 END
        FROM    FlatMaster
                LEFT JOIN OwnerMaster
                    ON OwnerMaster.OwnerName = FlatMaster.OwnerName
    ), Flats AS
    (   SELECT  FlatMaster.Floor,
                FlatMaster.FlatNo,
                FlatMaster.IsOccupied,
                EmptyFlatsEitherSide = CASE WHEN PrevFlat.IsOccupied = 'NO' AND NextFlat.IsOccupied  = 'NO' THEN 1 ELSE 0 END,
                EmptyFloor = CASE WHEN COUNT(CASE WHEN FlatMaster.IsOccupied = 'YES' THEN 1 END) OVER(PARTITION BY FlatMaster.Floor) = 0 THEN 1 ELSE 0 END,
                PrevIsNationalityMatch = ISNULL(PrevFlat.IsNationalityMatch, 0),
                NextIsNationalityMatch = ISNULL(NextFlat.IsNationalityMatch, 0)
        FROM    FlatMaster
                LEFT JOIN FlatOwnerNationality PrevFlat
                    ON PrevFlat.Floor = FlatMaster.Floor
                    AND PrevFlat.FlatNo = FlatMaster.FlatNo - 1
                LEFT JOIN FlatOwnerNationality NextFlat
                    ON NextFlat.Floor = FlatMaster.Floor
                    AND NextFlat.FlatNo = FlatMaster.FlatNo + 1
    ), RankedFlats AS
    (   SELECT  *,
                RowNumber = ROW_NUMBER() OVER(ORDER BY PrevIsNationalityMatch DESC, 
                                                        NextIsNationalityMatch DESC, 
                                                        EmptyFloor DESC, 
                                                        EmptyFlatsEitherSide DESC,
                                                        Floor, 
                                                        FlatNo)
        FROM    Flats
        WHERE   IsOccupied = 'NO'
    )
    SELECT  Floor,
            FlatNo,
            MatchedOn = CASE WHEN PrevIsNationalityMatch = 1 THEN 'First Flat after same nationality owner'
                            WHEN NextIsNationalityMatch = 1 THEN 'First Flat before same nationality owner'
                            WHEN EmptyFloor = 1 THEN 'No Nationality Match, placed on empty floor'
                            WHEN EmptyFlatsEitherSide = 1 THEN 'Next flat with empty flats either side'
                            ELSE 'First Available Flat'
                        END
    FROM    RankedFlats
    WHERE   RowNumber = 1;
    

    Brazil Example - Floor 1, Flat 4

    England Example - Floor 1, Flat 2

    Spain Example - Floor 2, Flat 1

    EDIT

    DECLARE @NewOwnerNationality VARCHAR(20) = 'BRAZIL';
    
    WITH FlatOwnerNationality AS
    (   SELECT  FlatMaster.Floor, 
                FlatMaster.FlatNo, 
                FlatMaster.IsOccupied,
                IsNationalityMatch = CASE WHEN OwnerMaster.OwnerNationality = @NewOwnerNationality THEN 1 ELSE 0 END
        FROM    FlatMaster
                LEFT JOIN OwnerMaster
                    ON OwnerMaster.OwnerName = FlatMaster.OwnerName
    ), Flats AS
    (   SELECT  FlatMaster.Floor,
                FlatMaster.FlatNo,
                FlatMaster.IsOccupied,
                EmptyFlatsEitherSide = CASE WHEN PrevFlat.IsOccupied = 'NO' AND NextFlat.IsOccupied  = 'NO' AND PrevFlat2.IsOccupied = 'NO' AND NextFlat2.IsOccupied  = 'NO' THEN 1 ELSE 0 END,
                EmptyFloor = CASE WHEN COUNT(CASE WHEN FlatMaster.IsOccupied = 'YES' THEN 1 END) OVER(PARTITION BY FlatMaster.Floor) = 0 THEN 1 ELSE 0 END,
                PrevIsNationalityMatch = ISNULL(PrevFlat.IsNationalityMatch, 0),
                NextIsNationalityMatch = ISNULL(NextFlat.IsNationalityMatch, 0)
        FROM    FlatMaster
                LEFT JOIN FlatOwnerNationality PrevFlat
                    ON PrevFlat.Floor = FlatMaster.Floor
                    AND PrevFlat.FlatNo = FlatMaster.FlatNo - 1
                LEFT JOIN FlatOwnerNationality NextFlat
                    ON NextFlat.Floor = FlatMaster.Floor
                    AND NextFlat.FlatNo = FlatMaster.FlatNo + 1
                LEFT JOIN FlatMaster PrevFlat2
                    ON PrevFlat2.Floor = FlatMaster.Floor
                    AND PrevFlat2.FlatNo = FlatMaster.FlatNo - 2
                LEFT JOIN FlatMaster NextFlat2
                    ON NextFlat2.Floor = FlatMaster.Floor
                    AND NextFlat2.FlatNo = FlatMaster.FlatNo + 2
    
    ), RankedFlats AS
    (   SELECT  *,
                RowNumber = ROW_NUMBER() OVER(ORDER BY PrevIsNationalityMatch DESC, 
                                                        NextIsNationalityMatch DESC, 
                                                        EmptyFloor DESC, 
                                                        EmptyFlatsEitherSide DESC,
                                                        Floor, 
                                                        FlatNo)
        FROM    Flats
        WHERE   IsOccupied = 'NO'
    )
    SELECT  Floor,
            FlatNo,
            MatchedOn = CASE WHEN PrevIsNationalityMatch = 1 THEN 'First Flat after same nationality owner'
                            WHEN NextIsNationalityMatch = 1 THEN 'First Flat before same nationality owner'
                            WHEN EmptyFloor = 1 THEN 'No Nationality Match, placed on empty floor'
                            WHEN EmptyFlatsEitherSide = 1 THEN 'Next flat with empty flats either side'
                            ELSE 'First Available Flat'
                        END
    FROM    RankedFlats
    WHERE   RowNumber = 1;
    
    qid & accept id: (16426039, 16427224) query: Stored procedure for getting sum of entries in table for each ID soup:

    soup wrap:

    Logically, you are grouping by two criteria, scale and skill name. However, if I understand it correctly, every row is supposed to represent a single skill name. Therefore, you should group by tblSkill.Name only. To get different counts for different scales in separate columns, you can use conditional aggregation, i.e. aggregation on an expression that (usually) involves a CASE construct. Here's how you could go about it:

    SELECT 
       tblSkill.Name AS skillname,
       COUNT(CASE tblSkillMetrics.Scale WHEN 1 THEN EmployeeID END) AS NotAplicable,
       COUNT(CASE tblSkillMetrics.Scale WHEN 2 THEN EmployeeID END) AS Beginner,
       COUNT(CASE tblSkillMetrics.Scale WHEN 3 THEN EmployeeID END) AS Proficient,
       COUNT(CASE tblSkillMetrics.Scale WHEN 4 THEN EmployeeID END) AS Expert
    FROM
       tblSkill
    INNER JOIN 
       tblSkillMetrics ON tblSkillMetrics.SkillID = tblSkill.ID
    GROUP BY 
       tblSkill.Name 
    ORDER BY 
       skillname DESC
    ;
    

    Note that there's a special syntax for this kind of query. It employs the PIVOT keyword, since what you get is essentially a grouped result set pivoted on one of the grouping criteria, scale in this case. This is how the same could be achieved with PIVOT:

    SELECT
       skillname,
       [1] AS NotAplicable,
       [2] AS Beginner,
       [3] AS Proficient,
       [4] AS Expert
    FROM (
       SELECT 
          tblSkill.Name AS skillname,
          tblSkillMetrics.Scale,
          EmployeeID
       FROM
          tblSkill
       INNER JOIN 
          tblSkillMetrics ON tblSkillMetrics.SkillID = tblSkill.ID
    ) s
    PIVOT (
       COUNT(EmployeeID) FOR Scale IN ([1], [2], [3], [4])
    ) p
    ;
    

    Basically, PIVOT implies grouping: every column in the source dataset that is not used as an argument of an aggregate function in the PIVOT clause is a grouping criterion. One of those columns is also assigned to be the one the results are pivoted on (again, in this case it is Scale).

    Because grouping is implicit, a derived table is used to avoid grouping by more criteria than necessary. Values of Scale become names of new columns that the PIVOT clause produces. (That is why they are delimited with square brackets when listed in PIVOT: they are not IDs in that context but identifiers delimited as required by Transact-SQL syntax.)
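
    To see why the derived table matters, consider a hypothetical variant that also exposes tblSkill.ID to PIVOT; ID then joins the implicit grouping and you get one row per (ID, Name) pair instead of one per name:

    SELECT skillname, [1] AS NotAplicable, [2] AS Beginner, [3] AS Proficient, [4] AS Expert
    FROM (
       SELECT
          tblSkill.ID,           -- extra column: silently becomes a grouping criterion
          tblSkill.Name AS skillname,
          tblSkillMetrics.Scale,
          EmployeeID
       FROM tblSkill
       INNER JOIN tblSkillMetrics ON tblSkillMetrics.SkillID = tblSkill.ID
    ) s
    PIVOT (
       COUNT(EmployeeID) FOR Scale IN ([1], [2], [3], [4])
    ) p;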

    qid & accept id: (16426094, 16426214) query: Query Data From Two Tables + One Table Must Only Query Using Most Recent Data soup:

    soup wrap:

    You can simply add another JOIN to your existing query. It is also a lot cleaner to use an explicit (INNER) JOIN that matches keys in the ON clause than an implicit CROSS JOIN (comma-separated tables) filtered in the WHERE clause:

    SELECT p.VehicleKey, p.Timestamp, p.Latitude, p.Longitude, p.Speed, v.Name
    FROM AVLVehiclePosition p
    JOIN Vehicles v
      ON p.VehicleKey = v.VehicleKey
    JOIN (SELECT max(Timestamp) as maxtime, VehicleKey
          FROM AVLVehiclePosition
          GROUP BY VehicleKey) maxresults
      ON p.VehicleKey = maxresults.VehicleKey
      AND p.Timestamp = maxresults.maxtime
    

    And you can make this even cleaner if you make use of ROW_NUMBER():

    WITH maxResults AS (
      SELECT p.VehicleKey, p.Timestamp, p.Latitude, p.Longitude, p.Speed, v.Name,
             ROW_NUMBER() OVER (PARTITION BY p.VehicleKey ORDER BY p.Timestamp DESC) rowNum
      FROM AVLVehiclePosition p
      JOIN Vehicles v
        ON p.VehicleKey = v.VehicleKey)
    SELECT * FROM maxResults
    WHERE rowNum = 1
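
    One caveat worth adding: if a vehicle has two rows sharing its latest Timestamp, the GROUP BY/MAX version returns both, while ROW_NUMBER() keeps an arbitrary one of them. If ties should be kept, swap in RANK(), which assigns 1 to every tied row:

    WITH maxResults AS (
      SELECT p.VehicleKey, p.Timestamp, p.Latitude, p.Longitude, p.Speed, v.Name,
             RANK() OVER (PARTITION BY p.VehicleKey ORDER BY p.Timestamp DESC) rowNum
      FROM AVLVehiclePosition p
      JOIN Vehicles v
        ON p.VehicleKey = v.VehicleKey)
    SELECT * FROM maxResults
    WHERE rowNum = 1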
    
    qid & accept id: (16442686, 16442782) query: SQL Count of columns result for all existing Dates in the table soup:

    soup wrap:

    If you want the sum of all dates, just remove the where clause:

    select DTTransaction.machinename, count(DTTransaction.machinename)
    from DTTransaction join
         DTHotelReservation
         on DTTransaction.TransactionID = DTHotelReservation.TransactionID and
            DTHotelReservation.HCOMCID in (415428, 415429, 415430, 415431, 415432)
    group by DTTransaction.machinename
    

    If you want the results by date, then include that in your group by. For instance,

    select DTTransaction.machinename, convert(varchar(10),BookedOn,101), count(DTTransaction.machinename)
    from DTTransaction join
         DTHotelReservation
         on DTTransaction.TransactionID = DTHotelReservation.TransactionID and
            DTHotelReservation.HCOMCID in (415428, 415429, 415430, 415431, 415432)
    group by DTTransaction.machinename, convert(varchar(10),BookedOn,101)
    order by 1, MAX(BookedOn)
    

    I included an order by clause, so the results will be in order by date within each machine name.

    qid & accept id: (16487093, 16488203) query: SQL Full outer join or alternative solution soup:

    soup wrap:

    (assuming the OP wants a fully symmetric outer 4-join)

    WITH four AS (
            SELECT id, event_dt FROM t1
            UNION
            SELECT id, event_dt FROM t2
            UNION
            SELECT id, event_dt FROM t3
            UNION
            SELECT id, event_dt FROM t4
            )
    SELECT f.id, f.event_dt
            , t1.amt1
            , t2.amt2
            , t3.amt3
            , t4.amt4
    FROM four f
    LEFT JOIN t1 ON t1.id = f.id AND t1.event_dt = f.event_dt
    LEFT JOIN t2 ON t2.id = f.id AND t2.event_dt = f.event_dt
    LEFT JOIN t3 ON t3.id = f.id AND t3.event_dt = f.event_dt
    LEFT JOIN t4 ON t4.id = f.id AND t4.event_dt = f.event_dt
    ORDER BY id, event_dt
            ;
    

    Result:

     id |  event_dt  | amt1 | amt2 | amt3 | amt4 
    ----+------------+------+------+------+------
      1 | 2012-04-01 |    1 |      |      |     
      1 | 2012-04-02 |    1 |      |    3 |     
      1 | 2012-04-03 |    1 |      |    3 |     
      1 | 2012-04-06 |      |    2 |    3 |    4
      1 | 2012-04-07 |      |    2 |      |     
      2 | 2012-04-01 |   40 |      |      |     
      2 | 2012-04-02 |      |      |    3 |     
      2 | 2012-04-03 |      |      |    3 |     
      2 | 2012-04-04 |   40 |      |      |     
    (9 rows)
    

    BTW: after the UNION in four, the LEFT JOINs do the same as FULL JOINs would here, because four already contains all the possible {id, event_dt} pairs.
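
    For comparison, a direct chain of FULL JOINs also works, but it needs COALESCE on the accumulated join keys at every step, which is why the UNION-of-keys form is usually cleaner. A sketch over the same four tables:

    SELECT COALESCE(t1.id, t2.id, t3.id, t4.id)                          AS id
         , COALESCE(t1.event_dt, t2.event_dt, t3.event_dt, t4.event_dt) AS event_dt
         , t1.amt1, t2.amt2, t3.amt3, t4.amt4
    FROM t1
    FULL JOIN t2 ON t2.id = t1.id AND t2.event_dt = t1.event_dt
    FULL JOIN t3 ON t3.id = COALESCE(t1.id, t2.id)
                AND t3.event_dt = COALESCE(t1.event_dt, t2.event_dt)
    FULL JOIN t4 ON t4.id = COALESCE(t1.id, t2.id, t3.id)
                AND t4.event_dt = COALESCE(t1.event_dt, t2.event_dt, t3.event_dt)
    ORDER BY id, event_dt;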

    qid & accept id: (16490625, 16490738) query: SQL Server 2012: JOIN 3 tables for a condition soup:

    soup wrap:

    You can do this with a rather inefficient, nested query structure in an update clause.

    In SQL Server syntax:

    update tableC
        set Name = (select top 1 b.name
                    from TableB b 
                    where b.name not in (select name from TableA a where a.id = TableC.id)
                    order by NEWID()
                   )
    

    The innermost select from TableA gets all the names for the same id. The where clause chooses names that are not in this list. The TOP 1 with ORDER BY NEWID() randomly selects one of the remaining names.

    Here is an example of the code that works, according to my understanding of the problem:

    declare @tableA table (id int, name varchar(2));
    declare @tableB table (name varchar(2));
    declare @tableC table (id int, name varchar(2))
    
    insert into @tableA(id, name)
        select 01, 'A4' union all
        select 01, 'SH' union all
        select 01, '9K' union all
        select 02, 'M1' union all
        select 02, 'L4' union all
        select 03, '2G' union all
        select 03, '99';
    
    insert into @tableB(name)
        select '5G' union all
        select 'U8' union all
        select '02' union all
        select '45' union all
        select '23' union all
        select 'J7' union all
        select '99' union all
        select '9F' union all
        select 'A4' union all
        select 'H2';
    
    
    insert into @tableC(id)
        select 01 union all
        select 01 union all
        select 01 union all
        select 02 union all
        select 02 union all
        select 03 union all
        select 03;
    
    /*    
    select * from @tableA;
    select * from @tableB;
    select * from @tableC;
     */
    
    update c
        set Name = (select top 1 b.name
                    from @TableB b 
                    where b.name not in (select name from @TableA a where a.id = c.id)
                    order by NEWID()
                   )
    from @tableC c
    
    select *
    from @tableC
    
    qid & accept id: (16507239, 16508385) query: join comma delimited data column soup:

    Ideally, your best solution would be to normalize Table2 so you are not storing a comma separated list.


    Once you have this data normalized then you can easily query the data. The new table structure could be similar to this:

    CREATE TABLE T1
    (
      [col1] varchar(2),
      [col2] varchar(5),
      constraint pk1_t1 primary key (col1)
    );

    INSERT INTO T1
        ([col1], [col2])
    VALUES
        ('C1', 'john'),
        ('C2', 'alex'),
        ('C3', 'piers'),
        ('C4', 'sara')
    ;

    CREATE TABLE T2
    (
      [col1] varchar(2),
      [col2] varchar(2),
      constraint pk1_t2 primary key (col1, col2),
      constraint fk1_col2 foreign key (col2) references t1 (col1)
    );

    INSERT INTO T2
        ([col1], [col2])
    VALUES
        ('R1', 'C1'),
        ('R1', 'C2'),
        ('R1', 'C4'),
        ('R2', 'C3'),
        ('R2', 'C4'),
        ('R3', 'C1'),
        ('R3', 'C4')
    ;

    Normalizing the tables would make it much easier for you to query the data by joining the tables:

    select t2.col1, t1.col2
    from t2
    inner join t1
      on t2.col2 = t1.col1

    See Demo


    Then if you wanted to display the data as a comma-separated list, you could use FOR XML PATH and STUFF:

    \n
    select distinct t2.col1, \n  STUFF(\n         (SELECT distinct ', ' + t1.col2\n          FROM t1\n          inner join t2 t\n            on t1.col1 = t.col2\n          where t2.col1 = t.col1\n          FOR XML PATH ('')), 1, 1, '') col2\nfrom t2;\n
    \n

    See Demo.

    \n

    If you are not able to normalize the data, then there are several things that you can do.

    \n

    First, you could create a split function that will convert the data stored in the list into rows that can be joined on. The split function would be similar to this:

    \n
    CREATE FUNCTION [dbo].[Split](@String varchar(MAX), @Delimiter char(1))       \nreturns @temptable TABLE (items varchar(MAX))       \nas       \nbegin      \n    declare @idx int       \n    declare @slice varchar(8000)       \n\n    select @idx = 1       \n        if len(@String)<1 or @String is null  return       \n\n    while @idx!= 0       \n    begin       \n        set @idx = charindex(@Delimiter,@String)       \n        if @idx!=0       \n            set @slice = left(@String,@idx - 1)       \n        else       \n            set @slice = @String       \n\n        if(len(@slice)>0)  \n            insert into @temptable(Items) values(@slice)       \n\n        set @String = right(@String,len(@String) - @idx)       \n        if len(@String) = 0 break       \n    end   \nreturn \nend;\n
    \n

    When you use the split, function you can either leave the data in the multiple rows or you can concatenate the values back into a comma separated list:

    \n
    ;with cte as\n(\n  select c.col1, t1.col2\n  from t1\n  inner join \n  (\n    select t2.col1, i.items col2\n    from t2\n    cross apply dbo.split(t2.col2, ',') i\n  ) c\n    on t1.col1 = c.col2\n) \nselect distinct c.col1, \n  STUFF(\n         (SELECT distinct ', ' + c1.col2\n          FROM cte c1\n          where c.col1 = c1.col1\n          FOR XML PATH ('')), 1, 1, '') col2\nfrom cte c\n
    \n

    See Demo.

    \n

    A final way that you could get the result is by applying FOR XML PATH directly.

    \n
    select col1, \n(\n  select ', '+t1.col2\n  from t1\n  where ','+t2.col2+',' like '%,'+cast(t1.col1 as varchar(10))+',%'\n  for xml path(''), type\n).value('substring(text()[1], 3)', 'varchar(max)') as col2\nfrom t2;\n
    \n

    See SQL Fiddle with Demo

    \n soup wrap:

    Ideally, your best solution would be to normalize Table2 so you are not storing a comma-separated list.

    Once you have normalized this data, you can query it easily. The new table structure could be similar to this:

    CREATE TABLE T1
    (
      [col1] varchar(2), 
      [col2] varchar(5),
      constraint pk1_t1 primary key (col1)
    );
    
    INSERT INTO T1
        ([col1], [col2])
    VALUES
        ('C1', 'john'),
        ('C2', 'alex'),
        ('C3', 'piers'),
        ('C4', 'sara')
    ;
    
    CREATE TABLE T2
    (
      [col1] varchar(2), 
      [col2] varchar(2),
      constraint pk1_t2 primary key (col1, col2),
      constraint fk1_col2 foreign key (col2) references t1 (col1)
    );
    
    INSERT INTO T2
        ([col1], [col2])
    VALUES
        ('R1', 'C1'),
        ('R1', 'C2'),
        ('R1', 'C4'),
        ('R2', 'C3'),
        ('R2', 'C4'),
        ('R3', 'C1'),
        ('R3', 'C4')
    ;
    

    Normalizing the tables would make it much easier for you to query the data by joining the tables:

    select t2.col1, t1.col2
    from t2
    inner join t1
      on t2.col2 = t1.col1
    

    See Demo

    Then if you wanted to display the data as a comma-separated list, you could use FOR XML PATH and STUFF:

    select distinct t2.col1, 
      STUFF(
             (SELECT distinct ', ' + t1.col2
              FROM t1
              inner join t2 t
                on t1.col1 = t.col2
              where t2.col1 = t.col1
              FOR XML PATH ('')), 1, 1, '') col2
    from t2;
    

    See Demo.
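The `STUFF(... FOR XML PATH(''))` trick is SQL Server-specific, but the same comma-separated aggregation can be sketched with Python's standard-library `sqlite3` and its `GROUP_CONCAT` aggregate, using the sample T1/T2 data from above (note that `GROUP_CONCAT` does not guarantee the order of items within a group):

```python
import sqlite3

# In-memory database loaded with the sample T1/T2 data from the answer.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1 (col1 TEXT PRIMARY KEY, col2 TEXT);
INSERT INTO t1 VALUES ('C1','john'),('C2','alex'),('C3','piers'),('C4','sara');
CREATE TABLE t2 (col1 TEXT, col2 TEXT REFERENCES t1(col1));
INSERT INTO t2 VALUES ('R1','C1'),('R1','C2'),('R1','C4'),
                      ('R2','C3'),('R2','C4'),('R3','C1'),('R3','C4');
""")

# GROUP_CONCAT plays the role of STUFF(... FOR XML PATH('')) here.
rows = con.execute("""
    SELECT t2.col1, GROUP_CONCAT(t1.col2, ', ') AS names
    FROM t2
    JOIN t1 ON t1.col1 = t2.col2
    GROUP BY t2.col1
    ORDER BY t2.col1
""").fetchall()
for col1, names in rows:
    print(col1, names)
```

This is only a cross-check of the idea; in SQL Server you would stay with the `STUFF`/`FOR XML PATH` form shown above.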

    If you are not able to normalize the data, then there are several things that you can do.

    First, you could create a split function that will convert the data stored in the list into rows that can be joined on. The split function would be similar to this:

    CREATE FUNCTION [dbo].[Split](@String varchar(MAX), @Delimiter char(1))       
    returns @temptable TABLE (items varchar(MAX))       
    as       
    begin      
        declare @idx int       
        declare @slice varchar(8000)       
    
        select @idx = 1       
            if len(@String)<1 or @String is null  return       
    
        while @idx!= 0       
        begin       
            set @idx = charindex(@Delimiter,@String)       
            if @idx!=0       
                set @slice = left(@String,@idx - 1)       
            else       
                set @slice = @String       
    
            if(len(@slice)>0)  
                insert into @temptable(Items) values(@slice)       
    
            set @String = right(@String,len(@String) - @idx)       
            if len(@String) = 0 break       
        end   
    return 
    end;
    

    When you use the split function, you can either leave the data in multiple rows or concatenate the values back into a comma-separated list:

    ;with cte as
    (
      select c.col1, t1.col2
      from t1
      inner join 
      (
        select t2.col1, i.items col2
        from t2
        cross apply dbo.split(t2.col2, ',') i
      ) c
        on t1.col1 = c.col2
    ) 
    select distinct c.col1, 
      STUFF(
             (SELECT distinct ', ' + c1.col2
              FROM cte c1
              where c.col1 = c1.col1
              FOR XML PATH ('')), 1, 1, '') col2
    from cte c
    

    See Demo.

    A final way that you could get the result is by applying FOR XML PATH directly.

    select col1, 
    (
      select ', '+t1.col2
      from t1
      where ','+t2.col2+',' like '%,'+cast(t1.col1 as varchar(10))+',%'
      for xml path(''), type
    ).value('substring(text()[1], 3)', 'varchar(max)') as col2
    from t2;
    

    See SQL Fiddle with Demo

    qid & accept id: (16550767, 16550825) query: ORACLE Update with MINUS result soup:


    How about:

    update table1
       set d = 'TEST'
     where (a,b,c) not in(select a,b,c from table2);
    

    Edit: The performance of MINUS generally sucks, due to the sort operation. Also, NOT IN returns no rows at all if the subquery produces a NULL, so if any of {a,b,c} are nullable, try the following instead:

    update table1 t1
       set t1.d = 'TEST'
     where not exists(
             select 'x'
               from table2 t2
              where t2.a = t1.a
                and t2.b = t1.b
                and t2.c = t1.c
           );
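The nullable caveat is easy to verify: `NOT IN` yields no rows as soon as the subquery returns a NULL, while `NOT EXISTS` with explicit equality predicates still behaves. A minimal single-column sketch with Python's `sqlite3` (table names and data invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1 (a INTEGER);
CREATE TABLE t2 (a INTEGER);
INSERT INTO t1 VALUES (1), (2), (3);
INSERT INTO t2 VALUES (1), (NULL);   -- the NULL poisons NOT IN
""")

# NOT IN: "a <> 1 AND a <> NULL" is never TRUE, so no rows come back.
not_in = con.execute(
    "SELECT a FROM t1 WHERE a NOT IN (SELECT a FROM t2)").fetchall()

# NOT EXISTS with an explicit equality predicate ignores the NULL row.
not_exists = con.execute(
    "SELECT a FROM t1 WHERE NOT EXISTS "
    "(SELECT 1 FROM t2 WHERE t2.a = t1.a)").fetchall()

print(not_in)      # []
print(not_exists)  # rows 2 and 3
```

The same logic carries over to the multi-column `(a,b,c)` form in Oracle.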
    
    qid & accept id: (16569297, 16569344) query: T-SQL How to build an aggregate table based on max values from a group? soup:
    SELECT  account_code, product_id
    FROM    (
                SELECT  account_code, product_id, num_purchases,
                        DENSE_RANK() OVER (PARTITION BY account_code 
                                            ORDER BY num_purchases DESC) RowID
                FROM    TableName
            )records
    WHERE   RowID = 1
    

    OUTPUT

    ╔══════════════╦════════════╗
    ║ ACCOUNT_CODE ║ PRODUCT_ID ║
    ╠══════════════╬════════════╣
    ║ abc123       ║          1 ║
    ║ xyz789       ║          1 ║
    ╚══════════════╩════════════╝
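The reason for `DENSE_RANK() ... = 1` rather than a plain `GROUP BY ... MAX` is that it keeps every row tied for the per-account maximum. That selection rule can be sketched in plain Python (sample rows invented for the demo, including a tie):

```python
from collections import defaultdict

# (account_code, product_id, num_purchases) -- invented sample rows
rows = [
    ("abc123", 1, 7), ("abc123", 2, 3),
    ("xyz789", 1, 5), ("xyz789", 3, 5),  # tie: both would get rank 1
]

# Group rows per account, then keep every row matching the group's maximum,
# mirroring DENSE_RANK() OVER (PARTITION BY account ORDER BY n DESC) = 1.
by_account = defaultdict(list)
for account, product, n in rows:
    by_account[account].append((product, n))

top = []
for account, items in by_account.items():
    best = max(n for _, n in items)
    top.extend((account, product) for product, n in items if n == best)

print(sorted(top))
```

With the tie in the sample data, xyz789 contributes two rows, just as the window-function query would.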
    
    qid & accept id: (16668803, 16668874) query: Sql - Fetch next value to replace variable value soup:


    Try this one -

    Query:

    DECLARE 
          @prime_schema SYSNAME = 'aaa'
        , @next_schema SYSNAME = 'bbb'
    
    DECLARE @SQL NVARCHAR(MAX)
    SELECT @SQL = (
        SELECT CHAR(13) + '
            SELECT * 
            INTO [' + @next_schema + '].[' + o.name + ']
            FROM [' + s.name + '].[' + o.name + ']
            WHERE 1 != 1'
        FROM sys.objects o WITH (NOWAIT)
        JOIN sys.schemas s WITH (NOWAIT) ON o.[schema_id] = s.[schema_id]
        WHERE o.[type] = 'U'
            AND s.name = @prime_schema
            AND o.name IN ('table1', 'table2', 'table3')
        FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)')
    
    PRINT @SQL
    

    Output:

    SELECT * 
    INTO [bbb].[table1]
    FROM [aaa].[table1]
    WHERE 1 != 1
    
    SELECT * 
    INTO [bbb].[table2]
    FROM [aaa].[table2]
    WHERE 1 != 1
    
    SELECT * 
    INTO [bbb].[table3]
    FROM [aaa].[table3]
    WHERE 1 != 1
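The answer builds one long batch of statements by concatenating over the catalog rows. The same generate-then-print shape can be sketched in Python, with the schema names and table list taken from the example (they are placeholders, not real objects):

```python
# Inputs mirroring @prime_schema / @next_schema and the table list.
prime_schema, next_schema = "aaa", "bbb"
tables = ["table1", "table2", "table3"]

# Build one statement per table, following the SELECT ... INTO template;
# WHERE 1 != 1 copies only the structure, never the rows.
statements = [
    f"SELECT *\nINTO [{next_schema}].[{name}]\n"
    f"FROM [{prime_schema}].[{name}]\nWHERE 1 != 1"
    for name in tables
]
sql = "\n\n".join(statements)
print(sql)
```

In T-SQL the generated string would then be handed to `EXEC sp_executesql` instead of `PRINT` once it looks right.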
    
    qid & accept id: (16685165, 16686190) query: Sql dividing one record into to many records soup:


    Try this:

    SELECT
        user_id,
        SUBSTRING_INDEX(tags,'<',2) as tag
    FROM
        t1
    UNION ALL
    SELECT
        user_id,
        SUBSTRING_INDEX(tags,'>',-2) as tag
    FROM
        t1
    

    UPDATE: for distinct values you can use :

    SELECT
        user_id,
        tag
    FROM (
        SELECT
            user_id,
            SUBSTRING_INDEX(tags,'<',2) as tag
        FROM
            t1
        UNION ALL
        SELECT
            user_id,
            SUBSTRING_INDEX(tags,'>',-2) as tag
        FROM
            t1
    ) as tmp
        GROUP BY
            user_id,
            tag
    
    qid & accept id: (16688990, 16691059) query: How to display progress bar while executing big SQLCommand VB.Net soup:


    Here is a cut-down example of how to do asynchronous work with VB.Net 4.0.

    Let's imagine you have a form with the following imports:

    Imports System.Windows.Forms
    Imports System.Threading
    Imports System.Threading.Tasks
    

    That form has two controls:

    Private WithEvents DoSomething As Button
    Private WithEvents Progress As ProgressBar
    

    Somewhere in your application we have a function called ExecuteSlowStuff; it is the equivalent of your executeMyQuery. The important part is the Action parameter, which the function uses to report progress.

    Private Shared Function ExecuteSlowStuff(ByVal progress As Action) As Integer
        Dim result = 0
        For i = 0 To 10000
            result += i
            Thread.Sleep(500)
            progress()
        Next
    
        Return result
    End Function
    

    Let's say this work is started by a click of the DoSomething button.

    Private Sub Start() Handles DoSomething.Click
        Dim slowStuff = Task(Of Integer).Factory.StartNew(
        Function() ExecuteSlowStuff(AddressOf Me.ShowProgress))
    End Sub
    

    You're probably wondering where ShowProgress comes from; that is the messier bit.

    Private Sub ShowProgress()
        If Me.Progress.InvokeRequired Then
            Dim cross As new Action(AddressOf Me.ShowProgress)
            Me.Invoke(cross)
        Else 
            If Me.Progress.Value = Me.Progress.Maximum Then
                Me.Progress.Value = Me.Progress.Minimum
            Else
                Me.Progress.Increment(1)
            End If
    
            Me.Progress.Refresh()
        End If
    End Sub
    

    Note that because ShowProgress can be invoked from another thread, it checks for cross-thread calls. In that case it invokes itself on the main thread.

    qid & accept id: (16756054, 16756109) query: Convert select result to column name in SQL Server soup:


    How about

    SELECT 
      CASE datename(dw,getdate())
        WHEN 'Monday'    THEN Monday
        WHEN 'Tuesday'   THEN Tuesday
        WHEN 'Wednesday' THEN Wednesday
        WHEN 'Thursday'  THEN Thursday
        WHEN 'Friday'    THEN Friday
        WHEN 'Saturday'  THEN Saturday
        WHEN 'Sunday'    THEN Sunday
      END today
      FROM @MyTemp
     WHERE Name = 'Test'
    

    Sample output:

    | TODAY |
    ---------
    | 09:30 |
    

    SQLFiddle

    qid & accept id: (16797418, 16797478) query: TSql Sum By Date soup:


    If you want the number of records for each day:

    SELECT DTTM,COUNT(*) AS Total
    FROM 
    [Audits].[dbo].[Miscount]
    Group by DTTM
    Order by DTTM desc
    

    Or if you want a sum of a field on each record:

    SELECT DTTM,SUM(field1) AS Sum
    FROM 
    [Audits].[dbo].[Miscount]
    Group by DTTM
    Order by DTTM desc
    

    Or if DTTM is a datetime then you can use:

    SELECT DATEADD(dd, 0, DATEDIFF(dd, 0, DTTM)) AS DTTM,COUNT(*) AS Total
    FROM 
    [Audits].[dbo].[Miscount]
    Group by DATEADD(dd, 0, DATEDIFF(dd, 0, DTTM))
    Order by DATEADD(dd, 0, DATEDIFF(dd, 0, DTTM)) desc
    

    Newer versions of SQL Server support a Date type, so you can do this instead:

    SELECT CAST(DTTM AS Date) AS DTTM,COUNT(*) AS Total
    FROM 
    [Audits].[dbo].[Miscount]
    Group by CAST(DTTM AS Date)
    Order by CAST(DTTM AS Date) desc
    
    qid & accept id: (16799445, 16799630) query: Select date + 3 days, not including weekends and holidays soup:


    EDIT: Changed to include non-workdays as valid fromDates.

    WITH rankedDates AS
        (
            SELECT 
                thedate
                , ROW_NUMBER()
                    OVER(
                        ORDER BY thedate
                        ) dateRank
            FROM 
                calendar c
            WHERE 
                c.isweekday = 1 
                AND 
                c.isholiday = 0
        )
    SELECT 
        c1.fromdate
        , rd2.thedate todate
    FROM
        ( 
            SELECT 
                c.thedate fromDate
                , 
                    (
                        SELECT 
                            TOP 1 daterank
                        FROM 
                            rankedDates rd
                        WHERE
                            rd.thedate <= c.thedate
                        ORDER BY 
                            thedate DESC
                    ) dateRank
            FROM 
                calendar c
        ) c1        
    LEFT JOIN
        rankedDates rd2
        ON 
            c1.dateRank + 3 = rd2.dateRank        
    

    You could put a date rank column on the calendar table to simplify this and avoid the CTE:

    CREATE TABLE
        calendar
        (
            TheDate DATETIME PRIMARY KEY
            , isweekday BIT NOT NULL
            , isHoliday BIT NOT NULL DEFAULT 0
            , dateRank INT NOT NULL
        );
    

    Then you'd set the daterank column only where it's a non-holiday weekday.
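The dateRank idea — number only the working days, then look up rank + 3 — can be sketched in plain Python with a tiny hand-made calendar (the dates and the single holiday are invented for the demo):

```python
from datetime import date, timedelta

# Invented two-week calendar starting Monday 2013-05-13,
# with one holiday on the Monday of the second week.
start = date(2013, 5, 13)
holidays = {date(2013, 5, 20)}
calendar = [start + timedelta(days=i) for i in range(14)]

# Rank only non-holiday weekdays, mirroring the CTE's ROW_NUMBER().
workdays = [d for d in calendar if d.weekday() < 5 and d not in holidays]
rank = {d: i for i, d in enumerate(workdays)}

def plus_three_workdays(from_date):
    """Latest workday rank at or before from_date, then 3 ranks ahead."""
    base = max((d for d in workdays if d <= from_date), default=None)
    if base is None:
        return None
    i = rank[base] + 3
    return workdays[i] if i < len(workdays) else None

# Thu 2013-05-16 + 3 workdays skips the weekend and the holiday.
print(plus_three_workdays(date(2013, 5, 16)))  # 2013-05-22
```

A weekend fromDate falls back to the preceding Friday's rank, which is exactly what the `TOP 1 ... thedate <= c.thedate ORDER BY thedate DESC` subquery does.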

    qid & accept id: (16874590, 16874868) query: Order SQL request when each row contains id of the next one soup:


    Solutions for SQL Server 2008-2012, PostgreSQL 9.1.9, Oracle 11g

    Actually, a recursive CTE is a solution for almost all current RDBMSs, including PostgreSQL (explanations and an example are shown below). However, there is a better, optimized solution for Oracle databases: hierarchical queries.

    NOCYCLE instructs Oracle to return rows even if your data has a loop in it.

    CONNECT_BY_ROOT gives you access to the root element, even several layers down in the query.

    Using the HR schema:

    The corresponding code for Oracle 11g:

    select
    b.id_bus_line, b.id_bus_stop
    from BusLine_BusStop b
    start with b.is_first_stop = 1
    connect by nocycle prior b.id_next_bus_stop = b.id_bus_stop and prior b.id_bus_line = b.id_bus_line
    

    DEMO for Oracle 11g (code of my own).

    Please note that recursive CTEs are standardized in SQL:1999. As you can see, there are several differences between SQL Server and PostgreSQL.

    The following solution is for SQL Server 2012:

    ;WITH route AS
    (
      SELECT BusLineId, BusStopId, NextBusStopId
      FROM BusLine_BusStop
      WHERE IsFirstStop = 1
      UNION ALL
      SELECT b.BusLineId, b.BusStopId, b.NextBusStopId
      FROM BusLine_BusStop b
      INNER JOIN route r
              ON r.BusLineId = b.BusLineId
             AND r.NextBusStopId = b.BusStopId
      WHERE IsFirstStop = 0 or IsFirstStop is null
    )
    SELECT BusLineId, BusStopId
    FROM route
    ORDER BY BusLineId
    

    DEMO for SQL Server 2012 (inspired by T I).

    And this one is for PostgreSQL 9.1.9 (it is not optimal but should work):

    The trick consists of creating a dedicated temporary sequence for the current session, which you can reset.

    create temp sequence rownum;
    
    WITH final_route AS
    (
      WITH RECURSIVE route AS
      (
        SELECT BusLineId, BusStopId, NextBusStopId
        FROM BusLine_BusStop
        WHERE IsFirstStop = 1
        UNION ALL
        SELECT b.BusLineId, b.BusStopId, b.NextBusStopId
        FROM BusLine_BusStop b
        INNER JOIN route r
                ON r.BusLineId = b.BusLineId
               AND r.NextBusStopId = b.BusStopId
        WHERE IsFirstStop = 0 or IsFirstStop is null
      )
      SELECT BusLineId, BusStopId, nextval('rownum') as rownum
      FROM route
    )
    SELECT BusLineId, BusStopId
    FROM final_route
    ORDER BY BusLineId, rownum;
    

    DEMO for PostgreSQL 9.1.9 of my own.
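Both the recursive CTE and Oracle's CONNECT BY are just following a next-pointer chain per bus line. Outside SQL, the same traversal can be sketched in a few lines of Python (sample rows invented, using the column shapes from the SQL Server version):

```python
# (bus_line, bus_stop, next_bus_stop, is_first_stop) -- invented sample rows
rows = [
    (1, "B", "C", 0), (1, "A", "B", 1), (1, "C", None, 0),
    (2, "X", "Y", 1), (2, "Y", None, 0),
]

def ordered_stops(line):
    """Follow next_bus_stop pointers starting at the line's first stop."""
    by_stop = {r[1]: r for r in rows if r[0] == line}
    current = next(r for r in rows if r[0] == line and r[3] == 1)
    route = []
    while current is not None:
        route.append(current[1])
        nxt = current[2]
        current = by_stop.get(nxt) if nxt is not None else None
    return route

print(ordered_stops(1))  # ['A', 'B', 'C']
```

Each recursive step of the CTE corresponds to one iteration of the while loop: join the current row's next pointer to the next row's stop id.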

    EDIT:

    Sorry for the multiple edits. It is quite uncommon to link records to their child record instead of to their parent. You can avoid this representation by dropping your isFirstStop column and connecting your records through an id_PreviousBusStop column (if possible). In that case, you have to set id_PreviousBusStop to NULL for the first record. You may save space (for fixed-length data, the entire space is still reserved), and your queries will become simpler and more efficient.

    qid & accept id: (16877276, 16877496) query: MySQL self inner joining and seaching in it soup:


    If I understand correctly, you probably meant Fanta of Coca-Cola, not vice versa.

    SELECT p.id_product, 
           CONCAT(p.name_product, ' of ', p1.name_product) name_product, 
           p.has_choice, 
           p.choice_id
      FROM products p JOIN products p1
        ON p.choice_id = p1.id_product
    

    Note that in this particular case the INNER JOIN eliminates the need for has_choice to get products that are choices of parent products.

    Output:

    | ID_PRODUCT |        NAME_PRODUCT | HAS_CHOICE | CHOICE_ID |
    -------------------------------------------------------------
    |          3 |  Fanta of Coca-Cola |          0 |         2 |
    |          4 | Sprite of Coca-Cola |          0 |         2 |
    

    Here is SQLFiddle demo.

    UPDATE1: To get a list of all products, whether they are choices of a product or not, you need to use a LEFT JOIN. To search the product names of both parent products and choices, use the appropriate table aliases in the WHERE clause.

    SELECT p.id_product,
           CASE WHEN p1.id_product IS NULL THEN
               p.name_product
           ELSE
               CONCAT(p.name_product, ' of ', p1.name_product) 
           END name_product, 
           p.has_choice, 
           p.choice_id
      FROM products p LEFT JOIN products p1  -- use LEFT JOIN here
        ON p.choice_id = p1.id_product
     WHERE p.has_choice = 0                  -- filter out parent products
       AND (p.name_product  LIKE '%a%'     -- search in product name
            OR
            p1.name_product LIKE '%a%') -- search in product name of a parent product
    

    The CASE in that query yields the plain product name for products that are not choices.

    Output:

    | ID_PRODUCT |        NAME_PRODUCT | HAS_CHOICE | CHOICE_ID |
    -------------------------------------------------------------
    |          3 |  Fanta of Coca-Cola |          0 |         2 |
    |          4 | Sprite of Coca-Cola |          0 |         2 |
    |          5 |               Axion |          0 |         0 |
    

    Here is SQLFiddle demo.

    qid & accept id: (16887108, 16887184) query: How to specify a foreign key? soup:


    Use the :foreign_key option:

    has_many :posts, :foreign_key => :poster_id
    

    For the Post model it will be

    belongs_to :user, :foreign_key => :poster_id
    

    or

    belongs_to :poster, :class_name => 'User'
    
    qid & accept id: (16895364, 16895443) query: Select value which don't have atleast one association soup:

    soup wrap:

    Try this one:

    SELECT * FROM Table1
    WHERE item_id IN ( 
                       SELECT item_id FROM Table1
                       GROUP BY item_id
                       HAVING MAX(category_id) = 0
                     )
    

    Result:

    ╔═════════╦═════════════╗
    ║ ITEM_ID ║ CATEGORY_ID ║
    ╠═════════╬═════════════╣
    ║       4 ║           0 ║
    ║       5 ║           0 ║
    ╚═════════╩═════════════╝
    

    See this SQLFiddle

    You can use the DISTINCT keyword if you don't want duplicate rows in the result:

    SELECT DISTINCT * FROM Table1
    WHERE item_id IN ( 
                       SELECT item_id FROM Table1
                       GROUP BY item_id
                       HAVING MAX(category_id) = 0
                     );
    

    See this SQLFiddle for more details.
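
    The HAVING MAX(category_id) = 0 trick can be exercised with SQLite as well; the sample rows below are invented so that items 4 and 5 are the only ones whose every category is 0.

```python
import sqlite3

# Invented sample data: items 4 and 5 have only category 0.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (item_id INTEGER, category_id INTEGER);
INSERT INTO Table1 VALUES (1, 2), (2, 3), (3, 1), (4, 0), (5, 0);
""")

# MAX(category_id) = 0 holds only when every category of the item is 0
# (assuming category ids are non-negative).
rows = sorted(conn.execute("""
SELECT * FROM Table1
WHERE item_id IN (
    SELECT item_id FROM Table1
    GROUP BY item_id
    HAVING MAX(category_id) = 0
)
""").fetchall())
print(rows)
```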

    qid & accept id: (16914206, 16914288) query: SQL SELECT statement when using look up table soup:

    soup wrap:

    You need to use multiple joins to work across the relationships.

    select e.id, e.name, e.startDate, r.RoleName 
    from employee e 
    join user_roles ur
    on e.id = ur.employee_id
    join roles r
    on r.id = ur.role_id
    

    Full Example

    /*DDL*/
    
    create table EMPLOYEE(
       ID int,
       Name varchar(50),
       StartDate date
    );
    
    create table USER_ROLES(
      Employee_ID int,
      Role_ID int
    );
    
    create table Roles(
      ID int,
      RoleName varchar(50)
    );
    
    insert into EMPLOYEE values(1, 'Jon Skeet', '2013-03-04');
    insert into USER_ROLES values (1,1);
    insert into ROLES values(1, 'Superman');
    
    /* Query */
    select e.id, e.name, e.startDate, r.RoleName 
    from employee e 
    join user_roles ur
    on e.id = ur.employee_id
    join roles r
    on r.id = ur.role_id;
    

    Working Example

    Nice Article Explaining Joins
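
    The full example above is plain enough SQL to run unchanged under SQLite, so it can be smoke-tested from Python's sqlite3 (dates are stored as TEXT in SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# DDL and data copied from the full example above.
conn.executescript("""
create table EMPLOYEE (ID int, Name varchar(50), StartDate date);
create table USER_ROLES (Employee_ID int, Role_ID int);
create table ROLES (ID int, RoleName varchar(50));

insert into EMPLOYEE values (1, 'Jon Skeet', '2013-03-04');
insert into USER_ROLES values (1, 1);
insert into ROLES values (1, 'Superman');
""")

# Two joins walk employee -> user_roles -> roles.
rows = conn.execute("""
select e.id, e.name, e.startDate, r.RoleName
from employee e
join user_roles ur on e.id = ur.employee_id
join roles r on r.id = ur.role_id
""").fetchall()
print(rows)
```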

    qid & accept id: (16930761, 16930783) query: Oracle 10g SQL: Return true if a column has only a value, but > 1 rows in a table soup:

    soup wrap:

    You want an aggregation with a CASE expression. The following query checks for multiple values (assuming no NULLs):

    select (case when count(distinct Reference) = 1 then 'TRUE'
                 else 'FALSE'
            end)
    from t
    

    If you really need the multiple rows as well:

    select (case when count(distinct Reference) = 1 and count(*) > 1 then 'TRUE'
                 else 'FALSE'
            end)
    from t
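
    A minimal check of the second variant (exactly one distinct value spread over more than one row) with SQLite; the table and values below are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (Reference TEXT);
INSERT INTO t VALUES ('A'), ('A'), ('A');
""")

# 'TRUE' only when every row carries the same (non-NULL) Reference
# and there is more than one row.
(result,) = conn.execute("""
select (case when count(distinct Reference) = 1 and count(*) > 1 then 'TRUE'
             else 'FALSE'
        end)
from t
""").fetchone()
print(result)
```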
    
    qid & accept id: (16962915, 16963367) query: Select id on grouped unique set of data soup:

    soup wrap:

    Although SQLite has group_concat(), it won't help here because the order of the concatenated elements is arbitrary; otherwise it would be the easiest way to do this.

    Instead, we have to think of this relationally. The idea is to do the following:

    1. Count the number of colors that two ids have in common
    2. Count the number of colors on each id
    3. Select id pairs where these three values are equal
    4. Identify each pair by the minimum id in the pair

    The distinct values of that minimum are then the list you want.

    The following query takes this approach:

    select distinct MIN(id2)
    from (select t1.id as id1, t2.id as id2, count(*) as cnt
          from t t1 join
               t t2
               on t1.color = t2.color
          group by t1.id, t2.id
         ) t1t2 join
         (select t.id, COUNT(*) as cnt
          from t
          group by t.id
         ) t1sum
         on t1t2.id1 = t1sum.id and t1sum.cnt = t1t2.cnt join
         (select t.id, COUNT(*) as cnt
          from t
          group by t.id
         ) t2sum
         on t1t2.id2 = t2sum.id and t2sum.cnt = t1t2.cnt
    group by t1t2.id1, t1t2.cnt, t1sum.cnt, t2sum.cnt
    

    I actually tested this in SQL Server by placing this with clause in front:

    with t as (
          select 1 as id, 'r' as color union all
          select 1, 'g' union all
          select 1, 'b' union all
          select 2 as id, 'r' as color union all
          select 2, 'g' union all
          select 2, 'b' union all
          select 3, 'r' union all
          select 4, 'y' union all
          select 4, 'p' union all
          select 5 as id, 'r' as color union all
          select 5, 'g' union all
          select 5, 'b' union all
          select 5, 'p'
         )
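
    The same WITH prefix also lets the query run under SQLite, which makes it easy to verify the expected result on the sample data: ids 1 and 2 share exactly {r,g,b}, so only the minimum id 1 survives, alongside the singleton groups 3, 4 and 5.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
query = """
with t as (
  select 1 as id, 'r' as color union all
  select 1, 'g' union all
  select 1, 'b' union all
  select 2, 'r' union all
  select 2, 'g' union all
  select 2, 'b' union all
  select 3, 'r' union all
  select 4, 'y' union all
  select 4, 'p' union all
  select 5, 'r' union all
  select 5, 'g' union all
  select 5, 'b' union all
  select 5, 'p'
)
select distinct MIN(id2)
from (select t1.id as id1, t2.id as id2, count(*) as cnt
      from t t1 join t t2 on t1.color = t2.color
      group by t1.id, t2.id
     ) t1t2 join
     (select t.id, COUNT(*) as cnt from t group by t.id) t1sum
     on t1t2.id1 = t1sum.id and t1sum.cnt = t1t2.cnt join
     (select t.id, COUNT(*) as cnt from t group by t.id) t2sum
     on t1t2.id2 = t2sum.id and t2sum.cnt = t1t2.cnt
group by t1t2.id1, t1t2.cnt, t1sum.cnt, t2sum.cnt
"""
# One representative (minimum) id per color-set group.
groups = sorted(r[0] for r in conn.execute(query).fetchall())
print(groups)
```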
    
    qid & accept id: (16971556, 16974128) query: Convert numeric to string inside a user-defined function soup:

    soup wrap:

    Converting numeric to text is the least of your problems.

    My purpose is to define a new variable "x%" as its name, with x varying as the function input.

    • First of all: there are no variables in an SQL function. SQL functions are just wrappers for valid SQL statements. Input and output parameters can be named, but names are static, not dynamic.

    • You may be thinking of a PL/pgSQL function, where you have procedural elements including variables. Parameter names are still static, though. There are no dynamic variable names in plpgsql. You can execute dynamic SQL with EXECUTE but that's something different entirely.

    • While it is possible to declare a static variable with a name like "123%" it is really exceptionally uncommon to do so. Maybe for deliberately obfuscating code? Other than that: Don't. Use proper, simple, legal, lower case variable names without the need to double-quote and without the potential to do something unexpected after a typo.

    • Since the window function ntile() returns integer and you run an equality check on the result, the input parameter should be integer, not numeric.

    • To assign a variable in plpgsql you can use the assignment operator := for a single variable or SELECT INTO for any number of variables. Either way, you want the query to return a single row or you have to loop.

    • If you want the maximum billed from the chosen percentile, you don't GROUP BY x, y. That might return multiple rows and does not do what you seem to want. Use plain max(billed) without GROUP BY to get a single row.

    • You don't need to double quote perfectly legal column names.

    A valid function might look like this. It's not exactly what you were trying to do, which cannot be done. But it may get you closer to what you actually need.

    CREATE OR REPLACE FUNCTION ntile_loop(x integer)
    RETURNS SETOF numeric as 
    $func$
    DECLARE
       myvar text;
    BEGIN
    
    SELECT INTO myvar  max(billed)
    FROM  (
       SELECT billed, id, cm
             ,ntile(100) OVER (PARTITION BY id, cm ORDER BY billed) AS tile
       FROM   table_all
       ) sub
    WHERE  sub.tile = $1;
    
    -- do something with myvar, depending on the value of $1 ...
    END
    $func$ LANGUAGE plpgsql;
    

    Long story short, you need to study the basics before you try to create sophisticated functions.

    Plain SQL

    After Q update:

    I'd like to calculate 5, 10, 20, 30, ....90th percentile and display all of them in the same table for each id+cm group.

    This simple query should do it all:

    SELECT id, cm, tile, max(billed) AS max_billed
    FROM  (
       SELECT billed, id, cm
             ,ntile(100) OVER (PARTITION BY id, cm ORDER BY billed) AS tile
       FROM   table_all
       ) sub
    WHERE (tile%10 = 0 OR tile = 5)
    AND    tile <= 90
    GROUP  BY 1,2,3
    ORDER  BY 1,2,3;
    

    % .. modulo operator
    GROUP BY 1,2,3 .. positional parameter
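
    The WHERE clause in that query keeps exactly the 5th percentile plus every 10th up to the 90th; the modulo logic can be sanity-checked on the 100 ntile buckets directly:

```python
# Which of the 100 ntile buckets survive the filter
# (tile % 10 = 0 OR tile = 5) AND tile <= 90 from the query above.
kept = [t for t in range(1, 101) if (t % 10 == 0 or t == 5) and t <= 90]
print(kept)
```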

    qid & accept id: (17003542, 17003759) query: How to compile multiple stored procedures from a single file? soup:
    soup wrap:
    @/path/main_script.sql:
    START script_one.sql
    START script_two.sql
    START script_three.sql
    START script_four.sql
    START script_five.sql
    

    OR

    @/path/main_script.sql:
    @@/path/script_one.sql
    @@/path/script_two.sql
    @@/path/script_three.sql
    @@/path/script_four.sql
    @@/path/script_five.sql
    
    qid & accept id: (17017125, 17017223) query: How to use "Group By" for date interval in postgres soup:

    soup wrap:

    You want to use the count aggregate as a window function, e.g. count(id) over (partition by event_date rows 3 preceding)... but it's greatly complicated by the nature of your data. You're storing timestamps, not just dates, and you want to group by day, not by number of previous events. To top it all off, you want to cross-tabulate the results.

    If PostgreSQL supported RANGE in window functions this would be considerably simpler than it is. As it is, you have to do it the hard way.

    You can then filter that through a window to get the per-event per-day lagged counts ... except that your event days aren't contiguous and unfortunately PostgreSQL window functions only support ROWS, not RANGE, so you have to join across a generated series of dates first.

    WITH
    /* First, get a listing of event counts by day */
    event_days(event_name, event_day, event_day_count) AS (
            SELECT event_name, date_trunc('day', event_date), count(id)
            FROM Table1
            GROUP BY event_name, date_trunc('day', event_date)
            ORDER BY date_trunc('day', event_date), event_name
    ),
    /* 
     * Then fill in zeros for any days within the range that didn't have any events.
     * If PostgreSQL supported RANGE windows, not just ROWS, we could get rid of this.
     */
    event_days_contiguous(event_name, event_day, event_day_count) AS (
            SELECT event_names.event_name, gen_day, COALESCE(event_days.event_day_count,0)
            FROM generate_series( (SELECT min(event_day)::date FROM event_days), (SELECT max(event_day)::date FROM event_days), INTERVAL '1' DAY ) gen_day
            CROSS JOIN (SELECT DISTINCT event_name FROM event_days) event_names(event_name)
            LEFT OUTER JOIN event_days ON (gen_day = event_days.event_day AND event_names.event_name = event_days.event_name)
    ),
    /*
     * Get the lagged counts by using the sum() function over a row window...
     */
    lagged_days(event_name, event_day_first, event_day_last, event_days_count) AS (
            SELECT event_name, event_day, first_value(event_day) OVER w, sum(event_day_count) OVER w
            FROM event_days_contiguous
            WINDOW w AS (PARTITION BY event_name ORDER BY event_day ROWS 1 PRECEDING)
    )
    /* Now do a manual pivot. For arbitrary column counts use an external tool
     * or check out the 'crosstab' function in the 'tablefunc' contrib module 
     */
    SELECT d1.event_day_first, d1.event_days_count AS "Event_A", d2.event_days_count AS "Event_B"
    FROM lagged_days d1
    INNER JOIN lagged_days d2 ON (d1.event_day_first = d2.event_day_first AND d1.event_name = 'event_A' AND d2.event_name = 'event_B')
    ORDER BY d1.event_day_first;
    

    Output with the sample data:

        event_day_first     | Event_A | Event_B 
    ------------------------+---------+---------
     2013-04-24 00:00:00+08 |       2 |       1
     2013-04-25 00:00:00+08 |       4 |       1
     2013-04-26 00:00:00+08 |       3 |       0
     2013-04-27 00:00:00+08 |       2 |       1
    (4 rows)
    

    You can potentially make the query faster but much uglier by combining the three CTE clauses into a nested query using FROM (SELECT...) and wrapping them in a view instead of a CTE for use from the outer query. This will allow Pg to "push down" predicates into the queries, greatly reducing the data you have to work with when querying subsets of the data.

    SQLFiddle doesn't seem to be working at the moment, but here's the demo setup I used:

    CREATE TABLE Table1 
    (id integer primary key, "event_date" timestamp not null, "event_name" text);
    
    INSERT INTO Table1
    ("id", "event_date", "event_name")
    VALUES
    (101, '2013-04-24 18:33:37', 'event_A'),
    (102, '2013-04-24 20:34:37', 'event_B'),
    (103, '2013-04-24 20:40:37', 'event_A'),
    (104, '2013-04-25 01:00:00', 'event_A'),
    (105, '2013-04-25 12:00:15', 'event_A'),
    (106, '2013-04-26 00:56:10', 'event_A'),
    (107, '2013-04-27 12:00:15', 'event_A'),
    (108, '2013-04-27 12:00:15', 'event_B');
    

    I changed the ID of the last entry from 107 to 108, as I suspect that was just an error in your manual editing.

    Here's how to express it as a view instead:

    CREATE VIEW lagged_days AS
    SELECT event_name, event_day AS event_day_first, sum(event_day_count) OVER w AS event_days_count 
    FROM (
            SELECT event_names.event_name, gen_day, COALESCE(event_days.event_day_count,0)
            FROM generate_series( (SELECT min(event_date)::date FROM Table1), (SELECT max(event_date)::date FROM Table1), INTERVAL '1' DAY ) gen_day
            CROSS JOIN (SELECT DISTINCT event_name FROM Table1) event_names(event_name)
            LEFT OUTER JOIN (
                    SELECT event_name, date_trunc('day', event_date), count(id)
                    FROM Table1
                    GROUP BY event_name, date_trunc('day', event_date)
                    ORDER BY date_trunc('day', event_date), event_name
            ) event_days(event_name, event_day, event_day_count)
            ON (gen_day = event_days.event_day AND event_names.event_name = event_days.event_name)
    ) event_days_contiguous(event_name, event_day, event_day_count)
    WINDOW w AS (PARTITION BY event_name ORDER BY event_day ROWS 1 PRECEDING);
    

    You can then use the view in whatever crosstab queries you want to write. It'll work with the prior hand-crosstab query:

    SELECT d1.event_day_first, d1.event_days_count AS "Event_A", d2.event_days_count AS "Event_B"
    FROM lagged_days d1
    INNER JOIN lagged_days d2 ON (d1.event_day_first = d2.event_day_first AND d1.event_name = 'event_A' AND d2.event_name = 'event_B')
    ORDER BY d1.event_day_first;
    

    ... or using crosstab from the tablefunc extension, which I'll let you study up on.

    For a laugh, here's the explain on the above view-based query: http://explain.depesz.com/s/nvUq
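
    The core step of the answer — the two-day rolling count via SUM(...) OVER (... ROWS 1 PRECEDING) — also exists in SQLite 3.25+, so it can be sketched in isolation. The per-day counts below are the event_A days from the sample data, pre-aggregated by hand; note this only lines up because these particular days are contiguous, which is exactly why the full answer pads missing days with generate_series.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Pre-aggregated per-day counts for event_A from the sample data above.
conn.executescript("""
CREATE TABLE day_counts (event_name TEXT, event_day TEXT, n INTEGER);
INSERT INTO day_counts VALUES
    ('event_A', '2013-04-24', 2),
    ('event_A', '2013-04-25', 2),
    ('event_A', '2013-04-26', 1),
    ('event_A', '2013-04-27', 1);
""")

# ROWS 1 PRECEDING sums the current day plus the previous day,
# matching the Event_A column of the output above.
rows = conn.execute("""
SELECT event_day,
       SUM(n) OVER (PARTITION BY event_name
                    ORDER BY event_day
                    ROWS 1 PRECEDING) AS lagged
FROM day_counts
ORDER BY event_day
""").fetchall()
print(rows)
```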

    qid & accept id: (17025457, 17025549) query: MIN/MAX price for each product (query) soup:

    soup wrap:

    First, when you use join, you should always have an on clause, even though MySQL does not require this. If you want a cross join, then be explicit about it.

    Second, you don't use the tm_markets table at all in the query. It is not needed, so remove it.

    The resulting query should work:

    SELECT MIN(`map`.`Product_Price`) as `minProductPrice`,
           MAX(`map`.`Product_Price`) as `maxProductPrice`,
           `pr`.`Product_Name` as `productName`
    FROM `bm_market_products` `map` join
         `bm_products` as `pr`
         on `map`.`Product_Id` = `pr`.`Product_Id`
    WHERE `map`.`Product_Id` = 1 
    

    Because you are only choosing one product, a group by is probably not necessary. You might consider this, however:

    SELECT MIN(`map`.`Product_Price`) as `minProductPrice`,
           MAX(`map`.`Product_Price`) as `maxProductPrice`,
           `pr`.`Product_Name` as `productName`
    FROM `bm_market_products` `map` join
         `bm_products` as `pr`
         on `map`.`Product_Id` = `pr`.`Product_Id`
    group by `map`.`Product_Id`
    

    That will return the information for all products.
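
    A quick sketch of the grouped MIN/MAX-per-product query with SQLite; the table names follow the answer, but the products and prices below are invented (backtick quoting is dropped since the identifiers need no quoting).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE bm_products (Product_Id INTEGER, Product_Name TEXT);
CREATE TABLE bm_market_products (Product_Id INTEGER, Product_Price REAL);
INSERT INTO bm_products VALUES (1, 'Widget'), (2, 'Gadget');
INSERT INTO bm_market_products VALUES
    (1, 2.0), (1, 5.0), (1, 3.0),
    (2, 7.5), (2, 6.0);
""")

# One row per product: its cheapest and dearest market price.
rows = sorted(conn.execute("""
SELECT MIN(map.Product_Price) AS minProductPrice,
       MAX(map.Product_Price) AS maxProductPrice,
       pr.Product_Name AS productName
FROM bm_market_products map
JOIN bm_products pr ON map.Product_Id = pr.Product_Id
GROUP BY map.Product_Id
""").fetchall())
print(rows)
```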

    qid & accept id: (17043777, 17044496) query: Is it possible to get results, and count of the results, at the same time? (to filter results based on the result count) soup:

    soup wrap:

    The following query:

    SELECT id, related_info, count(related_info)
    FROM my_table
    WHERE 
    group by id, related_info with rollup
    

    would produce results like:

    id | related_info |  count(related_info)|
    1  |         info1|                    1|
    1  |         info2|                    1|
    1  |         info3|                    1|
    1  |         NULL |                    3|
    

    rollup adds an extra row with the summary information.

    The solution is easy in most databases:

    SELECT id, related_info, count(related_info) over (partition by id)
    FROM my_table
    WHERE 
    

    Getting the equivalent in MySQL without repeating the where clause is challenging.

    A typical alternative in MySQL, if you need the list of "related_info", is to use group_concat:

    select id, group_concat(related_info), count(*)
    from my_table
    where 
    group by id;
    

    And a final method, assuming that related_info is a single column that uniquely identifies each row:

    select mt.id, mt.related_info, t.cnt
    from my_table mt join
         (select id, group_concat(related_info) as relatedInfoList, count(*) as cnt
          from my_table
          where 
          group by id
         ) t
         on mt.id = t.id and
            find_in_set(related_info, relatedInfoList) > 0
    

    This turns "related_info" into a list and then matches each row back to the original data. The same join-back could be done with a unique id in the original data (which, judging by the sample data, id is not).
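When each row already carries its group's id, the group_concat/find_in_set matching isn't needed and a plain join back on id is enough. A minimal sketch of that join-back, using SQLite via Python (table name and data are made up for illustration):

```python
import sqlite3

# Compute the per-id count in a derived table, then join it onto every
# row so each row carries the size of its group.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE my_table (id INTEGER, related_info TEXT);
    INSERT INTO my_table VALUES
        (1, 'info1'), (1, 'info2'), (1, 'info3'), (2, 'infoA');
""")
rows = conn.execute("""
    SELECT mt.id, mt.related_info, t.cnt
    FROM my_table mt
    JOIN (SELECT id, COUNT(*) AS cnt FROM my_table GROUP BY id) t
      ON mt.id = t.id
    ORDER BY mt.id, mt.related_info
""").fetchall()
print(rows)
```

Each row of id 1 comes back with cnt 3, and the single row of id 2 with cnt 1, so you can filter on the count without repeating the where clause.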

    qid & accept id: (17044086, 17044133) query: sum of two different rows(salary) in a table soup:

    Try using the COALESCE function.

    \n
    select sum(coalesce(columna, 0) + coalesce(columnb, 0))\n
    \n

    because otherwise, if any part is NULL, the result will be NULL.

    \n

    If you're talking about rows instead of columns:

    \n
    SELECT SUM(Salary)\nFROM yourTable\nWHERE Name IN ('Smith', 'Wong')\nGROUP BY Name\n
    \n soup wrap:

    Try using the COALESCE function.

    select sum(coalesce(columna, 0) + coalesce(columnb, 0))
    

    because otherwise, if any part is NULL, the result will be NULL.

    If you're talking about rows instead of columns:

    SELECT SUM(Salary)
    FROM yourTable
    WHERE Name IN ('Smith', 'Wong')
    GROUP BY Name
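A quick check of that row-wise sum, sketched with SQLite via Python (the table name and data are illustrative):

```python
import sqlite3

# Sum salaries per name, restricted to the names of interest.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE yourTable (Name TEXT, Salary INTEGER);
    INSERT INTO yourTable VALUES
        ('Smith', 100), ('Smith', 200), ('Wong', 300), ('Lee', 999);
""")
totals = dict(conn.execute("""
    SELECT Name, SUM(Salary)
    FROM yourTable
    WHERE Name IN ('Smith', 'Wong')
    GROUP BY Name
""").fetchall())
print(totals)
```

Note that GROUP BY Name gives one total per person; drop it and you'd get a single combined total for both names instead.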
    
    qid & accept id: (17057129, 17057798) query: MongoDB - How to Determine Date Created for Dynamically Created DBs and Collections? soup:

    For database: \nYou can check the creation time for "database-name.ns" file

    \n
    ls -l test.ns\n-rw------- 1 root root 16777216 Jun 12 07:10 test.ns\n
    \n

    For collection:\nMost of the time a collection is created when you insert something into it. So, if you are not creating the collection with the createCollection() command and you are using the default ObjectId for the _id key, then you can get a rough estimate of the collection's creation time from the time at which the first document was inserted into it.

    \n
    Mongo > db.test.find().sort({$natural : 1}).limit(1).toArray()[0]._id.getTimestamp()\nISODate("2013-06-12T01:40:04Z")\n
    \n soup wrap:

    For database: You can check the creation time for "database-name.ns" file

    ls -l test.ns
    -rw------- 1 root root 16777216 Jun 12 07:10 test.ns
    

    For collection: Most of the time a collection is created when you insert something into it. So, if you are not creating the collection with the createCollection() command and you are using the default ObjectId for the _id key, then you can get a rough estimate of the collection's creation time from the time at which the first document was inserted into it.

    Mongo > db.test.find().sort({$natural : 1}).limit(1).toArray()[0]._id.getTimestamp()
    ISODate("2013-06-12T01:40:04Z")
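The getTimestamp() trick works because the first 4 bytes of a default ObjectId are a big-endian Unix timestamp in seconds. A sketch of reading that field outside the shell, in plain Python:

```python
from datetime import datetime, timezone

def objectid_timestamp(oid_hex: str) -> datetime:
    # The first 4 bytes (8 hex digits) of a default ObjectId are a
    # big-endian Unix timestamp in seconds -- the same field that
    # getTimestamp() reads in the mongo shell.
    seconds = int(oid_hex[:8], 16)
    return datetime.fromtimestamp(seconds, tz=timezone.utc)

# Build a synthetic ObjectId whose timestamp field we know, then read it back.
when = datetime(2013, 6, 12, 1, 40, 4, tzinfo=timezone.utc)
oid = format(int(when.timestamp()), "08x") + "0" * 16  # 24 hex digits total
print(objectid_timestamp(oid))
```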
    
    qid & accept id: (17070859, 17070904) query: SQL Server inserting Date as 1/1/1900 soup:

    You are not passing it as NULL; you're inserting an empty string (''). You need:

    \n
    INSERT INTO [ABC] ([code],[updatedate],[flag],[Mfdate]) \nVALUES ('203', '6/12/2013','N/A', NULL) \n
    \n

    Although really, if you're going to insert date literals, it's best to use the unambiguous YYYYMMDD format, as:

    \n
    INSERT INTO [ABC] ([code],[updatedate],[flag],[Mfdate]) \nVALUES ('203', '20130612','N/A', NULL) \n
    \n soup wrap:

    You are not passing it as NULL; you're inserting an empty string (''). You need:

    INSERT INTO [ABC] ([code],[updatedate],[flag],[Mfdate]) 
    VALUES ('203', '6/12/2013','N/A', NULL) 
    

    Although really, if you're going to insert date literals, it's best to use the unambiguous YYYYMMDD format, as:

    INSERT INTO [ABC] ([code],[updatedate],[flag],[Mfdate]) 
    VALUES ('203', '20130612','N/A', NULL) 
    
    qid & accept id: (17073134, 17073196) query: SQL server join tables and pivot soup:

    This should work:

    \n
    WITH Sales AS (\n   SELECT\n      S.SaleID,\n      S.SoldBy,\n      S.SalePrice,\n      S.Margin,\n      S.Date,\n      I.SalePrice,\n      I.Category\n   FROM\n      dbo.Sale S\n      INNER JOIN dbo.SaleItem I\n         ON S.SaleID = I.SaleID\n)\nSELECT *\nFROM\n   Sales\n   PIVOT (Max(SalePrice) FOR Category IN (Books, Printing, DVD)) P\n;\n
    \n

    Or alternately:

    \n
    SELECT\n   S.SaleID,\n   S.SoldBy,\n   S.SalePrice,\n   S.Margin,\n   S.Date,\n   I.Books,\n   I.Printing,\n   I.DVD\nFROM\n   dbo.Sale S\n   INNER JOIN (\n      SELECT *\n      FROM\n         (SELECT SaleID, SalePrice, Category FROM dbo.SaleItem) I\n         PIVOT (Max(SalePrice) FOR Category IN (Books, Printing, DVD)) P\n   ) I ON S.SaleID = I.SaleID\n;\n
    \n

    These have the same resultset and may in fact be treated the same by the query optimizer, but possibly not. The big difference comes into play when you start putting conditions on the Sale table--you should test and see which query works better.

    \n

    May I suggest, however, that you do the pivoting in the presentation layer? If, for example, you are using SSRS it is quite easy to use a matrix control that will do all the pivoting for you. That is best, because then if you add a new Category, you won't have to modify all your SQL code!

    \n

    There is a way to dynamically find the column names to pivot, but it involves dynamic SQL. I don't really recommend that as the best way, either, though it is possible.

    \n

    Another way that could work would be to preprocess this query--meaning to set a trigger on the Category table that rewrites a VIEW to contain all the extant categories that exist. This does solve a lot of the other problems I've mentioned, but again, using the presentation layer is best.

    \n

    Note: If your column names (that were formerly values) are numbers or begin with a number, you must quote them with square brackets as in PIVOT (Max(Value) FOR CategoryId IN ([1], [2], [3], [4])) P. Alternately, you can modify the values before they get to the PIVOT part of the query to prepend some letters, so that the column list doesn't need escaping. For further reading on this check out the rules for identifiers in SQL Server.

    \n soup wrap:

    This should work:

    WITH Sales AS (
       SELECT
          S.SaleID,
          S.SoldBy,
          S.SalePrice,
          S.Margin,
          S.Date,
          I.SalePrice,
          I.Category
       FROM
          dbo.Sale S
          INNER JOIN dbo.SaleItem I
             ON S.SaleID = I.SaleID
    )
    SELECT *
    FROM
       Sales
       PIVOT (Max(SalePrice) FOR Category IN (Books, Printing, DVD)) P
    ;
    

    Or alternately:

    SELECT
       S.SaleID,
       S.SoldBy,
       S.SalePrice,
       S.Margin,
       S.Date,
       I.Books,
       I.Printing,
       I.DVD
    FROM
       dbo.Sale S
       INNER JOIN (
          SELECT *
          FROM
             (SELECT SaleID, SalePrice, Category FROM dbo.SaleItem) I
             PIVOT (Max(SalePrice) FOR Category IN (Books, Printing, DVD)) P
       ) I ON S.SaleID = I.SaleID
    ;
    

    These have the same resultset and may in fact be treated the same by the query optimizer, but possibly not. The big difference comes into play when you start putting conditions on the Sale table--you should test and see which query works better.

    May I suggest, however, that you do the pivoting in the presentation layer? If, for example, you are using SSRS it is quite easy to use a matrix control that will do all the pivoting for you. That is best, because then if you add a new Category, you won't have to modify all your SQL code!

    There is a way to dynamically find the column names to pivot, but it involves dynamic SQL. I don't really recommend that as the best way, either, though it is possible.

    Another way that could work would be to preprocess this query--meaning to set a trigger on the Category table that rewrites a VIEW to contain all the extant categories that exist. This does solve a lot of the other problems I've mentioned, but again, using the presentation layer is best.

    Note: If your column names (that were formerly values) are numbers or begin with a number, you must quote them with square brackets as in PIVOT (Max(Value) FOR CategoryId IN ([1], [2], [3], [4])) P. Alternately, you can modify the values before they get to the PIVOT part of the query to prepend some letters, so that the column list doesn't need escaping. For further reading on this check out the rules for identifiers in SQL Server.
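PIVOT is SQL Server-specific syntax; the portable equivalent is conditional aggregation (MAX over CASE expressions), which is what the dynamic-SQL variants generate anyway. A minimal sketch of that equivalent, using SQLite via Python (table and data are illustrative):

```python
import sqlite3

# One CASE per category plays the role of PIVOT's IN (...) column list.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE SaleItem (SaleID INTEGER, SalePrice REAL, Category TEXT);
    INSERT INTO SaleItem VALUES
        (1, 10.0, 'Books'), (1, 5.0, 'Printing'), (2, 20.0, 'DVD');
""")
rows = conn.execute("""
    SELECT SaleID,
           MAX(CASE WHEN Category = 'Books'    THEN SalePrice END) AS Books,
           MAX(CASE WHEN Category = 'Printing' THEN SalePrice END) AS Printing,
           MAX(CASE WHEN Category = 'DVD'      THEN SalePrice END) AS DVD
    FROM SaleItem
    GROUP BY SaleID
    ORDER BY SaleID
""").fetchall()
print(rows)
```

Categories with no row for a given SaleID come back as NULL, just as with PIVOT.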

    qid & accept id: (17099089, 17099191) query: MySQL query to append key:value to JSON string soup:

    What about this

    \n
    UPDATE table SET table_field1 = CONCAT(table_field1,' This will be added.');\n
    \n

    EDIT:

    \n

    I personally would do this manipulation in a language like PHP before inserting; it's much easier. Anyway, is this what you want? This should work provided the JSON being appended is in the format {'key':'value'}

    \n
     UPDATE table\n SET col = CONCAT_WS(",", SUBSTRING(col, 1, CHAR_LENGTH(col) - 1),SUBSTRING('newjson', 2));\n
    \n soup wrap:

    What about this

    UPDATE table SET table_field1 = CONCAT(table_field1,' This will be added.');
    

    EDIT:

    I personally would do this manipulation in a language like PHP before inserting; it's much easier. Anyway, is this what you want? This should work provided the JSON being appended is in the format {'key':'value'}

     UPDATE table
     SET col = CONCAT_WS(",", SUBSTRING(col, 1, CHAR_LENGTH(col) - 1),SUBSTRING('newjson', 2));
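The string surgery the UPDATE performs is easier to see outside SQL. The same logic in Python (a sketch; the function name is made up):

```python
def append_json_key(col: str, newjson: str) -> str:
    # Mirrors CONCAT_WS(',', SUBSTRING(col, 1, len - 1), SUBSTRING(newjson, 2)):
    # drop col's closing '}', drop newjson's opening '{', join with a comma.
    # Note: like the SQL, this produces a malformed '{,...}' if col is
    # an empty object '{}'.
    return col[:-1] + "," + newjson[1:]

print(append_json_key('{"key":"value"}', '{"new":"pair"}'))
```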
    
    qid & accept id: (17099697, 17099862) query: Output multiple child record ids to one row soup:

    You can do it using pivot and rank:

    \n
    select StudentID, [1] as P1, [2] as P2, [3] as P3 from (\n  select StudentID, ParentID, RANK() over (PARTITION BY StudentID ORDER BY ParentID) as rnk\n  from STUDENT_PARENTS\n) ranked PIVOT (min(ParentID) for rnk in ([1], [2], [3])) as p\n
    \n

    See it on SqlFiddle here:

    \n

    http://sqlfiddle.com/#!3/e3254/9

    \n

    If you are using GUIDs it's a little trickier: you need to cast them to BINARY to use min():

    \n
    select StudentID, \n    cast([1] as uniqueidentifier) as P1, \n    cast([2] as uniqueidentifier) as P2, \n    cast([3] as uniqueidentifier) as P3 \nfrom (\n  select StudentID, cast(ParentID as binary(16)) as ParentID, RANK() over (PARTITION BY StudentID ORDER BY StudentParentID) as rnk\n  from STUDENT_PARENTS\n) ranked PIVOT (min(ParentID) for rnk in ([1], [2], [3])) as p\n
    \n

    SqlFiddle here: http://sqlfiddle.com/#!3/8d0d7/14

    \n soup wrap:

    You can do it using pivot and rank:

    select StudentID, [1] as P1, [2] as P2, [3] as P3 from (
      select StudentID, ParentID, RANK() over (PARTITION BY StudentID ORDER BY ParentID) as rnk
      from STUDENT_PARENTS
    ) ranked PIVOT (min(ParentID) for rnk in ([1], [2], [3])) as p
    

    See it on SqlFiddle here:

    http://sqlfiddle.com/#!3/e3254/9

    If you are using GUIDs it's a little trickier: you need to cast them to BINARY to use min():

    select StudentID, 
        cast([1] as uniqueidentifier) as P1, 
        cast([2] as uniqueidentifier) as P2, 
        cast([3] as uniqueidentifier) as P3 
    from (
      select StudentID, cast(ParentID as binary(16)) as ParentID, RANK() over (PARTITION BY StudentID ORDER BY StudentParentID) as rnk
      from STUDENT_PARENTS
    ) ranked PIVOT (min(ParentID) for rnk in ([1], [2], [3])) as p
    

    SqlFiddle here: http://sqlfiddle.com/#!3/8d0d7/14

    qid & accept id: (17102375, 17102449) query: How do I use SQL's JOIN to select column A if column B = column C? soup:

    How about something like

    \n
    SELECT  m.username\nFROM    members m INNER JOIN\n    friends f   ON  m.id IN (f.user_id,f.friend_id)\nWHERE   m.id = $variable\n
    \n

    I noted that the above might return more than 1 entry based on the data in your tables, so here is another example.

    \n
    SELECT  m.username\nFROM \nmembers m\nWHERE m.id = 2    \nAND     EXISTS  (\n            SELECT  1 \n            FROM    friends f \n            WHERE m.id IN (f.user_id,f.friend_id)\n        )\n
    \n

    SQL Fiddle DEMO

    \n

    The above example will show you the difference between the 2 statements.

    \n

    This article has some nice visual representation of joins, and is always handy to have around.

    \n

    Introduction to JOINs – Basic of JOINs

    \n soup wrap:

    How about something like

    SELECT  m.username
    FROM    members m INNER JOIN
        friends f   ON  m.id IN (f.user_id,f.friend_id)
    WHERE   m.id = $variable
    

    I noted that the above might return more than 1 entry based on the data in your tables, so here is another example.

    SELECT  m.username
    FROM 
    members m
    WHERE m.id = 2    
    AND     EXISTS  (
                SELECT  1 
                FROM    friends f 
                WHERE m.id IN (f.user_id,f.friend_id)
            )
    

    SQL Fiddle DEMO

    The above example will show you the difference between the 2 statements.

    This article has some nice visual representation of joins, and is always handy to have around.

    Introduction to JOINs – Basic of JOINs
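To see the difference between the two statements concretely, here is a sketch of the EXISTS variant using SQLite via Python (tables and data are illustrative): the member row comes back at most once however many friendship rows match, whereas the JOIN form would duplicate it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE members (id INTEGER, username TEXT);
    CREATE TABLE friends (user_id INTEGER, friend_id INTEGER);
    INSERT INTO members VALUES (1, 'alice'), (2, 'bob'), (3, 'carol');
    INSERT INTO friends VALUES (1, 2), (2, 3);
""")
# bob (id 2) appears in two friendship rows, but EXISTS returns him once.
names = [r[0] for r in conn.execute("""
    SELECT m.username
    FROM members m
    WHERE m.id = 2
      AND EXISTS (SELECT 1 FROM friends f
                  WHERE m.id IN (f.user_id, f.friend_id))
""")]
print(names)
```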

    qid & accept id: (17113532, 17113770) query: LOAD DATA INFILE into Single Field on MySQL soup:

    There are a couple of ways of doing this, depending on the details of your scenario:

    \n

    LOAD DATA INFILE

    \n

    You probably want something like this:

    \n
    LOAD DATA LOCAL INFILE '/path/to/file/data_file.csv'\n    IGNORE\n    INTO TABLE `databasename`.`tablename`\n    CHARACTER SET utf8\n    FIELDS\n        TERMINATED BY '\n'\n        OPTIONALLY ENCLOSED BY '"'\n    IGNORE 1 LINES\n    (column1)\nSHOW WARNINGS;\n
    \n

    This will import from /path/to/file/data_file.csv into databasename.tablename, with each complete line in the text file being imported into a new row in the table, with all the data from that line being put into the column called column1. More details here.

    \n

    LOAD_FILE

    \n

    Or you could use the LOAD_FILE function, like this:

    \n
    UPDATE table\n  SET column1=LOAD_FILE('/path/to/file/data_file.csv')\n  WHERE id=1;\n
    \n

    This will import the contents of the file /path/to/file/data_file.csv and store it in column1 of the row where id=1. More details here. This is mostly intended for loading binary files into BLOB fields, but you can use it to suck a whole text file into a single column in a single row too, if that's what you want.

    \n

    Using a TEXT Column

    \n

    For loading large text files, you should use a column of type TEXT - it can store very large amounts of text with no problems - see here for more details.

    \n soup wrap:

    There are a couple of ways of doing this, depending on the details of your scenario:

    LOAD DATA INFILE

    You probably want something like this:

    LOAD DATA LOCAL INFILE '/path/to/file/data_file.csv'
        IGNORE
        INTO TABLE `databasename`.`tablename`
        CHARACTER SET utf8
        FIELDS
            TERMINATED BY '\n'
            OPTIONALLY ENCLOSED BY '"'
        IGNORE 1 LINES
        (column1)
    SHOW WARNINGS;
    

    This will import from /path/to/file/data_file.csv into databasename.tablename, with each complete line in the text file being imported into a new row in the table, with all the data from that line being put into the column called column1. More details here.

    LOAD_FILE

    Or you could use the LOAD_FILE function, like this:

    UPDATE table
      SET column1=LOAD_FILE('/path/to/file/data_file.csv')
      WHERE id=1;
    

    This will import the contents of the file /path/to/file/data_file.csv and store it in column1 of the row where id=1. More details here. This is mostly intended for loading binary files into BLOB fields, but you can use it to suck a whole text file into a single column in a single row too, if that's what you want.

    Using a TEXT Column

    For loading large text files, you should use a column of type TEXT - it can store very large amounts of text with no problems - see here for more details.
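If LOAD_FILE isn't available (it needs FILE privilege and server-side access to the path), the same effect is easy from application code: read the file in the client and bind it as one parameter. A sketch using SQLite via Python (path and table name are illustrative):

```python
import os
import sqlite3
import tempfile

# Write a small file to stand in for /path/to/file/data_file.csv.
path = os.path.join(tempfile.mkdtemp(), "data_file.csv")
with open(path, "w") as f:
    f.write("line1\nline2\n")

# Read the whole file client-side and store it in a single TEXT column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, column1 TEXT)")
with open(path) as f:
    conn.execute("INSERT INTO t VALUES (1, ?)", (f.read(),))

stored = conn.execute("SELECT column1 FROM t WHERE id = 1").fetchone()[0]
print(repr(stored))
```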

    qid & accept id: (17129510, 17129746) query: reuse the auto inserted generated field in another field soup:

    You can achieve your goal of generating folio numbers at insertion time using a BEFORE INSERT trigger and a separate table (if you don't mind one) for sequencing.

    \n

    First of all sequencing table

    \n
    CREATE TABLE table1_seq \n  (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY);\n
    \n

    Your actual table

    \n
    CREATE TABLE Table1\n  (`id` INT NOT NULL DEFAULT 0, \n   `folio` VARCHAR(32)\n   ...\n  );\n
    \n

    A trigger

    \n
    DELIMITER $$\nCREATE TRIGGER tg_table1_insert \nBEFORE INSERT ON Table1\nFOR EACH ROW\nBEGIN\n  INSERT INTO table1_seq VALUES (NULL);\n  SET NEW.id = LAST_INSERT_ID();\n  SET NEW.folio = CONCAT(DATE_FORMAT(CURDATE(), '%d%m%y'), UPPER(NEW.folio), NEW.id);\nEND$$\nDELIMITER ;\n
    \n

    Now you can insert a new record

    \n
    INSERT INTO Table1 (`folio`, ...)\nVALUES ('a', ...), ('e', ...);\n
    \n

    And you'll have in your table1

    \n
    \n| ID |    FOLIO |...\n-----------------...\n|  1 | 160613A1 |...\n|  2 | 160613E2 |...\n
    \n

    Here is SQLFiddle demo.

    \n

    Another way is just to wrap your INSERT and UPDATE in a stored procedure

    \n
    DELIMITER $$\nCREATE PROCEDURE sp_table1_insert (IN folio_type VARCHAR(1), ...)\nBEGIN\n  DECLARE newid INT DEFAULT 0;\n  START TRANSACTION;\n  INSERT INTO table1 (id, ...) VALUES (NULL, ...);\n  SET newid = LAST_INSERT_ID();\n  UPDATE table1 \n     SET folio = CONCAT(DATE_FORMAT(CURDATE(), '%d%m%y'), UPPER(folio_type), newid)\n   WHERE id = newid;\n  COMMIT;\nEND$$\nDELIMITER ;\n
    \n

    And then insert new records using this stored procedure

    \n
    CALL sp_table1_insert ('a',...);\nCALL sp_table1_insert ('e',...);\n
    \n

    Here is SQLFiddle demo for that.

    \n soup wrap:

    You can achieve your goal of generating folio numbers at insertion time using a BEFORE INSERT trigger and a separate table (if you don't mind one) for sequencing.

    First of all sequencing table

    CREATE TABLE table1_seq 
      (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY);
    

    Your actual table

    CREATE TABLE Table1
      (`id` INT NOT NULL DEFAULT 0, 
       `folio` VARCHAR(32)
       ...
      );
    

    A trigger

    DELIMITER $$
    CREATE TRIGGER tg_table1_insert 
    BEFORE INSERT ON Table1
    FOR EACH ROW
    BEGIN
      INSERT INTO table1_seq VALUES (NULL);
      SET NEW.id = LAST_INSERT_ID();
      SET NEW.folio = CONCAT(DATE_FORMAT(CURDATE(), '%d%m%y'), UPPER(NEW.folio), NEW.id);
    END$$
    DELIMITER ;
    

    Now you can insert a new record

    INSERT INTO Table1 (`folio`, ...)
    VALUES ('a', ...), ('e', ...);
    

    And you'll have in your table1

    | ID |    FOLIO |...
    -----------------...
    |  1 | 160613A1 |...
    |  2 | 160613E2 |...
    

    Here is SQLFiddle demo.

    Another way is just to wrap your INSERT and UPDATE in a stored procedure

    DELIMITER $$
    CREATE PROCEDURE sp_table1_insert (IN folio_type VARCHAR(1), ...)
    BEGIN
      DECLARE newid INT DEFAULT 0;
      START TRANSACTION;
      INSERT INTO table1 (id, ...) VALUES (NULL, ...);
      SET newid = LAST_INSERT_ID();
      UPDATE table1 
         SET folio = CONCAT(DATE_FORMAT(CURDATE(), '%d%m%y'), UPPER(folio_type), newid)
       WHERE id = newid;
      COMMIT;
    END$$
    DELIMITER ;
    

    And then insert new records using this stored procedure

    CALL sp_table1_insert ('a',...);
    CALL sp_table1_insert ('e',...);
    

    Here is SQLFiddle demo for that.
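The insert-then-update shape of the stored procedure can also live in application code. A sketch using SQLite via Python (names mirror the answer; the data is illustrative):

```python
import sqlite3
from datetime import date

# Same shape as sp_table1_insert: insert, read the generated id, then
# build the folio from date + type letter + id and write it back.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE table1 (id INTEGER PRIMARY KEY AUTOINCREMENT, folio TEXT)")

def insert_with_folio(folio_type: str) -> str:
    cur = conn.execute("INSERT INTO table1 (folio) VALUES (NULL)")
    new_id = cur.lastrowid          # plays the role of LAST_INSERT_ID()
    folio = date.today().strftime("%d%m%y") + folio_type.upper() + str(new_id)
    conn.execute("UPDATE table1 SET folio = ? WHERE id = ?", (folio, new_id))
    return folio

f1 = insert_with_folio("a")
f2 = insert_with_folio("e")
print(f1, f2)
```

As in the trigger, the result is ddmmyy + upper-cased type letter + generated id, e.g. 160613A1.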

    qid & accept id: (17163648, 17183754) query: How to exclude holidays between two dates? soup:

    Here is an even better and more efficient solution to the problem:

    \n
    SELECT A.ID,\nCOUNT(A.ID) AS COUNTED\nFROM tableA A\nLEFT JOIN TableB B\nON A.tableB_id=B.id\nLEFT JOIN holiday C\nON TRUNC(C.hdate) BETWEEN (TRUNC(a.date1) +1) AND TRUNC(B.date2)\nWHERE c.hdate IS NOT NULL\nGROUP BY A.ID;\n
    \n

    where TableA contains date1 and tableB contains date2. Holiday contains the list of holidays and Sundays. And this query excludes 'date1' from the count.

    \n

    RESULT LOGIC

    \n
    trunc(date2) - trunc(date1) = x      \nx - result of the query\n
    \n soup wrap:

    Here is an even better and more efficient solution to the problem:

    SELECT A.ID,
    COUNT(A.ID) AS COUNTED
    FROM tableA A
    LEFT JOIN TableB B
    ON A.tableB_id=B.id
    LEFT JOIN holiday C
    ON TRUNC(C.hdate) BETWEEN (TRUNC(a.date1) +1) AND TRUNC(B.date2)
    WHERE c.hdate IS NOT NULL
    GROUP BY A.ID;
    

    where TableA contains date1 and tableB contains date2. Holiday contains the list of holidays and Sundays. And this query excludes 'date1' from the count.

    RESULT LOGIC

    trunc(date2) - trunc(date1) = x      
    x - result of the query
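The same RESULT LOGIC can be sketched in plain Python (a toy version under the stated assumptions: the holiday table lists holidays and Sundays, and date1 itself is excluded from the range):

```python
from datetime import date

def working_days(date1: date, date2: date, holidays: set) -> int:
    # COUNTED is the number of holiday dates falling in (date1, date2],
    # matching TRUNC(a.date1) + 1 .. TRUNC(B.date2) in the query; the
    # answer is then (date2 - date1) minus COUNTED, per the RESULT LOGIC.
    counted = sum(1 for h in holidays if date1 < h <= date2)
    return (date2 - date1).days - counted

holidays = {date(2013, 6, 2), date(2013, 6, 9)}   # holidays and Sundays
print(working_days(date(2013, 6, 1), date(2013, 6, 8), holidays))
```

Here the 7-day span contains one listed holiday (June 2; June 9 falls outside it), leaving 6 counted days.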
    
    qid & accept id: (17255338, 17256047) query: escape entire column if all of that column's fields are null (or zero) soup:

    Technically you can do that with dynamic SQL, but whether you should take this approach at all is questionable.

    \n
    DELIMITER $$\nCREATE PROCEDURE sp_select_not_empty(IN tbl_name VARCHAR(64))\nBEGIN\n    SET @sql = NULL, @cols = NULL;\n    SELECT\n      GROUP_CONCAT(\n        CONCAT(\n          'SELECT ''',\n          column_name,\n          ''' name, COUNT(NULLIF(',\n          column_name, ', ', \n          CASE WHEN data_type IN('int', 'decimal') THEN 0 WHEN data_type IN('varchar', 'char') THEN '''''' END,\n          ')) n FROM ',\n          tbl_name\n        )\n      SEPARATOR ' UNION ALL ') INTO @sql\n     FROM INFORMATION_SCHEMA.COLUMNS \n    WHERE table_name = tbl_name;\n\n    SET @sql = CONCAT(\n                 'SELECT GROUP_CONCAT(name) INTO @cols FROM (', \n                 @sql, \n                 ') q WHERE q.n > 0'\n               );\n    PREPARE stmt FROM @sql;\n    EXECUTE stmt;\n\n    SET @sql = CONCAT('SELECT ', @cols, ' FROM ', @tbl);\n    PREPARE stmt FROM @sql;\n    EXECUTE stmt;\n    DEALLOCATE PREPARE stmt;\nEND$$\nDELIMITER ;\n
    \n

    Now calling our procedure

    \n
    CALL sp_select_not_empty('Table1');\n
    \n

    And we get

    \n
    \n+------+--------+--------+\n| id   | value1 | value3 |\n+------+--------+--------+\n|    1 |      3 | A      |\n|    2 |      5 | B      |\n|    3 |      0 | C      |\n|    4 |      9 | D      |\n|    5 |      7 | NULL   |\n|    6 |      9 | E      |\n+------+--------+--------+\n
    \n soup wrap:

    Technically you can do that with dynamic SQL, but whether you should take this approach at all is questionable.

    DELIMITER $$
    CREATE PROCEDURE sp_select_not_empty(IN tbl_name VARCHAR(64))
    BEGIN
        SET @sql = NULL, @cols = NULL;
        SELECT
          GROUP_CONCAT(
            CONCAT(
              'SELECT ''',
              column_name,
              ''' name, COUNT(NULLIF(',
              column_name, ', ', 
              CASE WHEN data_type IN('int', 'decimal') THEN 0 WHEN data_type IN('varchar', 'char') THEN '''''' END,
              ')) n FROM ',
              tbl_name
            )
          SEPARATOR ' UNION ALL ') INTO @sql
         FROM INFORMATION_SCHEMA.COLUMNS 
        WHERE table_name = tbl_name;
    
        SET @sql = CONCAT(
                     'SELECT GROUP_CONCAT(name) INTO @cols FROM (', 
                     @sql, 
                     ') q WHERE q.n > 0'
                   );
        PREPARE stmt FROM @sql;
        EXECUTE stmt;
    
        SET @sql = CONCAT('SELECT ', @cols, ' FROM ', @tbl);
        PREPARE stmt FROM @sql;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;
    END$$
    DELIMITER ;
    

    Now calling our procedure

    CALL sp_select_not_empty('Table1');
    

    And we get

    +------+--------+--------+
    | id   | value1 | value3 |
    +------+--------+--------+
    |    1 |      3 | A      |
    |    2 |      5 | B      |
    |    3 |      0 | C      |
    |    4 |      9 | D      |
    |    5 |      7 | NULL   |
    |    6 |      9 | E      |
    +------+--------+--------+
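The same two-phase idea (probe each column, then select only the non-empty ones) is often simpler in application code. A sketch using SQLite via Python (table and data mirror the example; the function name is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Table1 (id INTEGER, value1 INTEGER, value2 INTEGER, value3 TEXT);
    INSERT INTO Table1 VALUES (1, 3, 0, 'A'), (2, 5, 0, 'B'), (3, 0, NULL, 'C');
""")

def nonempty_columns(conn, table):
    # Keep a column only if some row holds a value that is neither NULL
    # nor 0 nor '' -- the COUNT(NULLIF(...)) > 0 test from the procedure.
    # Identifiers come from PRAGMA table_info, not from user input.
    cols = [r[1] for r in conn.execute(f"PRAGMA table_info({table})")]
    keep = []
    for c in cols:
        n = conn.execute(
            f"SELECT COUNT(NULLIF(NULLIF({c}, 0), '')) FROM {table}"
        ).fetchone()[0]
        if n > 0:
            keep.append(c)
    return keep

print(nonempty_columns(conn, "Table1"))
```

Here value2 holds only 0 and NULL, so it is dropped and the final SELECT would use id, value1, value3.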
    
    qid & accept id: (17265080, 17271468) query: Storing app preferences in Spring app soup:

    We use an approach with default values and a generic GUI. We use a property file that contains the default value as well as type information for every key. In the database we store only the values that have been modified by the user. The database schema is just a simple key/value table. The key is the same as the one from the property file; the value is a string, because we have to parse the default value anyway. The type info (int, positiveInt, boolean, string, text, html) from the property file is used by the generic GUI to present the right input control for every key.

    \n

    Example:

    \n

    default.properties

    \n
    my.example.value=1\nmy.example.type=int\n
    \n

    default.properties_en

    \n
    my.example.title=Example Value\nmy.example.description=This is..\n
    \n

    Db:\nKey=string(256)\nValue=string(2048)

    \n soup wrap:

    We use an approach with default values and a generic GUI. We use a property file that contains the default value as well as type information for every key. In the database we store only the values that have been modified by the user. The database schema is just a simple key/value table. The key is the same as the one from the property file; the value is a string, because we have to parse the default value anyway. The type info (int, positiveInt, boolean, string, text, html) from the property file is used by the generic GUI to present the right input control for every key.

    Example:

    default.properties

    my.example.value=1
    my.example.type=int
    

    default.properties_en

    my.example.title=Example Value
    my.example.description=This is..
    

    Db: Key=string(256) Value=string(2048)
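A sketch of the defaults-plus-overrides lookup this describes, in plain Python (the dicts stand in for the property file and the key/value table; names are illustrative):

```python
# The property file supplies a default and a type per key; the database
# key/value table stores only user-modified values, always as strings.
DEFAULTS = {
    "my.example.value": ("1", "int"),   # (default value, declared type)
}
db_overrides = {}                        # stands in for the key/value table

def get_pref(key: str):
    raw = db_overrides.get(key, DEFAULTS[key][0])
    kind = DEFAULTS[key][1]
    return int(raw) if kind == "int" else raw   # parse per the declared type

print(get_pref("my.example.value"))      # default applies
db_overrides["my.example.value"] = "5"   # user changed it in the generic GUI
print(get_pref("my.example.value"))      # override wins
```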

    qid & accept id: (17325149, 17325624) query: SQL query get lowest value from related record, subquery soup:

    You need to use an additional subquery to find out what the minimum radius is per mechanic (where the radius is greater than the distance), and then you can join this back to your two tables and get all the column information you need from the two tables:

    \n
    SELECT  m.ID, mz.Zone, m.distance, mz.radius\nFROM    Mechanics m\n        INNER JOIN mechanic_zones mz\n            ON mz.Mechanic_ID = m.ID\n        INNER JOIN\n        (   SELECT  m.ID, \n                    MIN(mz.radius) AS radius\n            FROM    Mechanics m\n                    INNER JOIN mechanic_zones mz\n                        ON mz.Mechanic_ID = m.ID\n            WHERE   mz.radius > M.distance\n            GROUP BY m.ID\n        ) MinZone\n            ON MinZone.ID = m.ID\n            AND MinZone.radius= mz.radius\nORDER BY mz.Zone;\n
    \n

    Example on SQL Fiddle

    \n

    If you don't actually want to know the radius of the selected zone, and the zone with the lowest radius will always have the lowest letter, you can just use:

    \n
    SELECT  m.ID, mz.MinZone, m.distance\nFROM    Mechanics m\n        INNER JOIN\n        (   SELECT  m.ID, \n                    MIN(mz.Zone) AS Zone\n            FROM    Mechanics m\n                    INNER JOIN mechanic_zones mz\n                        ON mz.Mechanic_ID = m.ID\n            WHERE   mz.radius > M.distance\n            GROUP BY m.ID\n        ) MinZone\n            ON MinZone.ID = m.ID\nORDER BY MinZone.Zone;\n
    \n

    Example on SQL Fiddle

    \n

    EDIT

    \n

    Your fiddle is very close to what I would use, but I would use the following so that the calculation is only done once:

    \n
    SELECT  m.id, m.name, m.distance, m.radius, m.zone\nFROM    (   SELECT  m.ID, \n                    m.Name,\n                    m.Distance,\n                    MIN(mz.radius) AS radius\n            FROM    (   SELECT  ID, Name, (1 * Distance) AS Distance\n                        FROM    Mechanics \n                    ) m\n                    INNER JOIN mechanic_zones mz\n                        ON mz.Mechanic_ID = m.ID\n            WHERE   mz.radius > M.distance\n            GROUP BY m.ID, m.Name, m.Distance\n        ) m\n        INNER JOIN  mechanic_zones mz\n            ON mz.Mechanic_ID = m.ID\n            AND mz.radius = m.radius;\n
    \n

    Example on SQL Fiddle

    \n

    The reasoning behind this is that your query has columns in the select list that are not in a group by, so there is no guarantee that the radius returned will be the lowest one. For example, if you change the order in which the records are inserted into mechanic_zones (as in this fiddle) your results become:

    \n
    ID  NAME    DTJ     RADIUS  ZONE\n1   Jon     2       10      a\n2   Paul    11      50      b\n3   George  5       5       a\n
    \n

    Instead of

    \n
    ID  NAME    DTJ     RADIUS  ZONE\n1   Jon     2       5       a\n2   Paul    11      20      b\n3   George  5       5       a\n
    \n

    As you can see the radius for Jon is wrong. To explain this further, below is an extract of an explanation I have written about the shortcomings of MySQL's implementation of implicit grouping.

    \n
    \n

    I would advise avoiding the implicit grouping offered by MySQL where possible; by this I mean including columns in the select list even though they are not contained in an aggregate function or the group by clause.

    \n

    Imagine the following simple table (T):

    \n
    ID  | Column1 | Column2  |\n----|---------+----------|\n1   |    A    |    X     |\n2   |    A    |    Y     |\n
    \n

    In MySQL you can write

    \n
    SELECT  ID, Column1, Column2\nFROM    T\nGROUP BY Column1;\n
    \n

    This actually breaks the SQL standard, but it works in MySQL. The trouble is that it is non-deterministic; the result:

    \n
    ID  | Column1 | Column2  |\n----|---------+----------|\n1   |    A    |    X     |\n
    \n

    Is no more or less correct than

    \n
    ID  | Column1 | Column2  |  \n----|---------+----------|\n2   |    A    |    Y     |\n
    \n

    What you are saying is "give me one row for each distinct value of Column1", which both result sets satisfy, so how do you know which one you will get? Well, you don't. It seems to be a fairly popular misconception that you can add an ORDER BY clause to influence the result; for example, the following query:

    \n
    SELECT  ID, Column1, Column2\nFROM    T\nGROUP BY Column1\nORDER BY ID DESC;\n
    \n

    Would ensure that you get the following result:

    \n
    ID  | Column1 | Column2  |  \n----|---------+----------|\n2   |    A    |    Y     |\n
    \n

    because of the ORDER BY ID DESC, however this is not true (as demonstrated here).

    \n

    The MySQL documentation states:

    \n
    \n

    The server is free to choose any value from each group, so unless they are the same, the values chosen are indeterminate. Furthermore, the selection of values from each group cannot be influenced by adding an ORDER BY clause.

    \n
    \n

    So even though you have an ORDER BY, it does not apply until after one row per group has been selected, and that one row is non-deterministic.

    \n

    The SQL standard does allow columns in the select list that are not contained in the GROUP BY or an aggregate function; however, these columns must be functionally dependent on a column in the GROUP BY. For example, ID in the sample table is the PRIMARY KEY, so we know it is unique in the table; the following query therefore conforms to the SQL standard, and would run in MySQL but fail in many DBMSs currently (at the time of writing, PostgreSQL is the closest DBMS I know of to correctly implementing the standard):

    \n
    SELECT  ID, Column1, Column2\nFROM    T\nGROUP BY ID;\n
    \n

    Since ID is unique for each row, there can only be one value of Column1 for each ID, one value of Column2 there is no ambiguity about what to return for each row.

    \n soup wrap:

    You need to use an additional subquery to find out the minimum radius per mechanic (where the radius is greater than the distance); you can then join this back to your two tables to get all the column information you need:

    SELECT  m.ID, mz.Zone, m.distance, mz.radius
    FROM    Mechanics m
            INNER JOIN mechanic_zones mz
                ON mz.Mechanic_ID = m.ID
            INNER JOIN
            (   SELECT  m.ID, 
                        MIN(mz.radius) AS radius
                FROM    Mechanics m
                        INNER JOIN mechanic_zones mz
                            ON mz.Mechanic_ID = m.ID
                WHERE   mz.radius > M.distance
                GROUP BY m.ID
            ) MinZone
                ON MinZone.ID = m.ID
                AND MinZone.radius = mz.radius
    ORDER BY mz.Zone;
    

    Example on SQL Fiddle
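    If you want to sanity-check the min-per-group-then-join-back pattern outside SQL Fiddle, here is a minimal, self-contained sketch using Python's sqlite3 module. The table and column names follow the answer; the data values are invented for illustration:

```python
import sqlite3

# Hypothetical miniature data set (names follow the answer, values invented)
# run through Python's sqlite3 to sanity-check the min-per-group pattern.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Mechanics (ID INTEGER PRIMARY KEY, distance INTEGER);
CREATE TABLE mechanic_zones (Mechanic_ID INTEGER, Zone TEXT, radius INTEGER);
INSERT INTO Mechanics VALUES (1, 2), (2, 11);
INSERT INTO mechanic_zones VALUES
 (1, 'a', 5), (1, 'b', 10),
 (2, 'a', 5), (2, 'b', 20), (2, 'c', 50);
""")

rows = con.execute("""
SELECT  m.ID, mz.Zone, m.distance, mz.radius
FROM    Mechanics m
        INNER JOIN mechanic_zones mz ON mz.Mechanic_ID = m.ID
        INNER JOIN
        (   SELECT  m.ID, MIN(mz.radius) AS radius
            FROM    Mechanics m
                    INNER JOIN mechanic_zones mz ON mz.Mechanic_ID = m.ID
            WHERE   mz.radius > m.distance
            GROUP BY m.ID
        ) MinZone
            ON MinZone.ID = m.ID
            AND MinZone.radius = mz.radius
ORDER BY m.ID
""").fetchall()
# Mechanic 1 (distance 2): smallest radius > 2 is 5 -> zone a
# Mechanic 2 (distance 11): smallest radius > 11 is 20 -> zone b
print(rows)  # [(1, 'a', 2, 5), (2, 'b', 11, 20)]
```

    The same query shape runs unchanged in MySQL.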

    If you don't actually want to know the radius of the selected zone, and the zone with the lowest radius will always have the lowest letter, you can just use:

    SELECT  m.ID, MinZone.Zone, m.distance
    FROM    Mechanics m
            INNER JOIN
            (   SELECT  m.ID, 
                        MIN(mz.Zone) AS Zone
                FROM    Mechanics m
                        INNER JOIN mechanic_zones mz
                            ON mz.Mechanic_ID = m.ID
                WHERE   mz.radius > M.distance
                GROUP BY m.ID
            ) MinZone
                ON MinZone.ID = m.ID
    ORDER BY MinZone.Zone;
    

    Example on SQL Fiddle

    EDIT

    Your fiddle is very close to what I would use, but I would use the following so that the calculation is only done once:

    SELECT  m.id, m.name, m.distance, m.radius, m.zone
    FROM    (   SELECT  m.ID, 
                        m.Name,
                        m.Distance,
                        MIN(mz.radius) AS radius
                FROM    (   SELECT  ID, Name, (1 * Distance) AS Distance
                            FROM    Mechanics 
                        ) m
                        INNER JOIN mechanic_zones mz
                            ON mz.Mechanic_ID = m.ID
                WHERE   mz.radius > M.distance
                GROUP BY m.ID, m.Name, m.Distance
            ) m
            INNER JOIN  mechanic_zones mz
                ON mz.Mechanic_ID = m.ID
                AND mz.radius = m.radius;
    

    Example on SQL Fiddle

    The reasoning behind this is that your query has columns in the select list that are not in a GROUP BY, so there is no guarantee that the radius returned will be the lowest one. For example, if you change the order in which the records are inserted into mechanic_zones (as in this fiddle) your results become:

    ID  NAME    DTJ     RADIUS  ZONE
    1   Jon     2       10      a
    2   Paul    11      50      b
    3   George  5       5       a
    

    Instead of

    ID  NAME    DTJ     RADIUS  ZONE
    1   Jon     2       5       a
    2   Paul    11      20      b
    3   George  5       5       a
    

    As you can see, the radius for Jon is wrong. To explain this further, below is an extract of an explanation I have written about the shortcomings of MySQL's implementation of implicit grouping.


    I would advise avoiding the implicit grouping offered by MySQL where possible; by this I mean including columns in the select list even though they are not contained in an aggregate function or the GROUP BY clause.

    Imagine the following simple table (T):

    ID  | Column1 | Column2  |
    ----|---------+----------|
    1   |    A    |    X     |
    2   |    A    |    Y     |
    

    In MySQL you can write

    SELECT  ID, Column1, Column2
    FROM    T
    GROUP BY Column1;
    

    This actually breaks the SQL standard, but it works in MySQL. The trouble is that it is non-deterministic; the result:

    ID  | Column1 | Column2  |
    ----|---------+----------|
    1   |    A    |    X     |
    

    Is no more or less correct than

    ID  | Column1 | Column2  |  
    ----|---------+----------|
    2   |    A    |    Y     |
    

    So what you are saying is: give me one row for each distinct value of Column1 - which both result sets satisfy. So how do you know which one you will get? Well, you don't. It seems to be a fairly popular misconception that you can add an ORDER BY clause to influence the result, so that, for example, the following query:

    SELECT  ID, Column1, Column2
    FROM    T
    GROUP BY Column1
    ORDER BY ID DESC;
    

    Would ensure that you get the following result:

    ID  | Column1 | Column2  |  
    ----|---------+----------|
    2   |    A    |    Y     |
    

    because of the ORDER BY ID DESC; however, this is not true (as demonstrated here).

    The MySQL documentation states:

    The server is free to choose any value from each group, so unless they are the same, the values chosen are indeterminate. Furthermore, the selection of values from each group cannot be influenced by adding an ORDER BY clause.

    So even though you have an ORDER BY, it does not apply until after one row per group has been selected, and this one row is non-deterministic.

    The SQL standard does allow columns in the select list that are not contained in the GROUP BY or an aggregate function, but these columns must be functionally dependent on a column in the GROUP BY. For example, ID in the sample table is the PRIMARY KEY, so we know it is unique in the table. The following query therefore conforms to the SQL standard; it runs in MySQL but fails in many DBMSs today (at the time of writing, PostgreSQL is the closest DBMS I know of to correctly implementing the standard):

    SELECT  ID, Column1, Column2
    FROM    T
    GROUP BY ID;
    

    Since ID is unique for each row, there can be only one value of Column1 and one value of Column2 for each ID, so there is no ambiguity about what to return for each row.
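    To see the difference concretely, here is a small sketch using Python's sqlite3 with the example table T from above. SQLite, like MySQL, permits bare select-list columns under GROUP BY, so it is handy for demonstrating the point:

```python
import sqlite3

# The example table T from above, rebuilt in an in-memory SQLite database.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE T (ID INTEGER PRIMARY KEY, Column1 TEXT, Column2 TEXT);
INSERT INTO T VALUES (1, 'A', 'X'), (2, 'A', 'Y');
""")

# Grouping by the PRIMARY KEY: Column1 and Column2 are functionally
# dependent on ID, so the result is fully determined -- one row per ID.
rows = con.execute(
    "SELECT ID, Column1, Column2 FROM T GROUP BY ID ORDER BY ID"
).fetchall()
print(rows)  # [(1, 'A', 'X'), (2, 'A', 'Y')]

# Grouping by Column1 collapses both rows into one group; which ID/Column2
# pair comes back is indeterminate, so only the row count can be relied on.
ambiguous = con.execute(
    "SELECT ID, Column1, Column2 FROM T GROUP BY Column1"
).fetchall()
print(len(ambiguous))  # 1
```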

    qid & accept id: (17340363, 17443175) query: Replacing Text which does not match a pattern in Oracle soup:

    soup wrap:

    The above solutions didn't work for me; below is what I did.

    update temp_table set col2=regexp_replace(col2,'([0-9]{10},[a-z0-9]+)','(\1)') ;
    update temp_table set col2=regexp_replace(col2,'\),[\s\S]*~\(','(\1)$');
    update temp_table set col2=regexp_replace(col2,'\).*?\(','$');
    update temp_table set col2=replace(regexp_replace(col2,'\).*',''),'(','');
    

    After these 4 update commands, the col2 will have something like

    1 1331882981,ab123456$1331890329,pqr123223
    2 1331882981,abc333$1331890329,pqrs23
    

    Then I wrote a function to split this result. The reason I went for a function is that I needed to split on "$" and col2 still held more than 10k characters.

    create or replace function parse( p_clob in clob ) return sys.odciVarchar2List
    pipelined
    as
            l_offset number := 1;
            l_clob   clob := translate( p_clob, chr(13)|| chr(10) || chr(9), '   ' ) || '$';
            l_hit    number;
    begin
            loop
          --Find occurrence of "$" from l_offset
              l_hit := instr( l_clob, '$', l_offset );
              exit when nvl(l_hit,0) = 0;
              --Extract string from l_offset to l_hit
              pipe row ( substr(l_clob, l_offset , (l_hit - l_offset)) );
              --Move offset
              l_offset := l_hit+1;
            end loop;
            return; --a pipelined function should end with a bare RETURN
    end;
    

    I then called

    select col1,
           REGEXP_SUBSTR(column_value, '[^,]+', 1, 1) col3,
           REGEXP_SUBSTR(column_value, '[^,]+', 1, 2) col4
      from temp_table, table(parse(temp_table.col2));
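    The pipelined function is Oracle-specific, but its splitting logic is easy to sketch in plain Python if you want to verify the intended behaviour (the sample value is taken from above):

```python
def parse(p_clob: str) -> list[str]:
    # Python sketch of the pipelined parse() function above: normalise
    # CR / LF / TAB to spaces, then split on the '$' delimiter.
    cleaned = p_clob.translate(str.maketrans("\r\n\t", "   "))
    return cleaned.split("$")

pieces = parse("1331882981,ab123456$1331890329,pqr123223")
print(pieces)  # ['1331882981,ab123456', '1331890329,pqr123223']

# The two REGEXP_SUBSTR calls then take the 1st and 2nd comma-separated
# fields of each piece:
cols = [tuple(piece.split(",")) for piece in pieces]
print(cols)  # [('1331882981', 'ab123456'), ('1331890329', 'pqr123223')]
```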
    
    qid & accept id: (17352572, 17353504) query: SQL Server - Setting multiple columns from another table soup:

    soup wrap:

    First off, I strongly suggest you look into an alternative. This will get messy very fast, as you're essentially treating rows as columns. It doesn't help much that Table1 is already denormalized - though if it really only has 3 columns, it's not that big of a deal to normalize it again:

    CREATE VIEW v_Table1 AS
       SELECT Id, Code1 as Code FROM Table1
       UNION SELECT Id, Code2 as Code FROM Table1
       UNION SELECT Id, Code3 as Code FROM Table1
    

    If we take your second query, it appears you want all possible combinations of ID and Category, and a boolean of whether that combination appears in Table2 (using Code to get back to ID in Table1).

    Since there doesn't appear to be a canonical list of ID and Category, we'll generate it:

    CREATE VIEW v_AllCategories AS
       SELECT DISTINCT ID, Category FROM v_Table1 CROSS JOIN Table2
    

    Getting the list of represented ID and Category is pretty straightforward:

    CREATE VIEW v_ReportedCategories AS
       SELECT DISTINCT ID, Category FROM Table2 
       JOIN v_Table1 ON Table2.Code = v_Table1.Code
    

    Put those together, and we can then get the bool to tell us which exists:

    CREATE VIEW v_CategoryReports AS
        SELECT
           T1.ID, T1.Category, CASE WHEN T2.ID IS NULL THEN 0 ELSE 1 END as Reported
        FROM v_AllCategories as T1
        LEFT OUTER JOIN v_ReportedCategories as T2 ON
           T1.ID = T2.ID
           AND T1.Category = T2.Category
    

    That gets you your answer in a normalized form:

    ID  | Category | Reported
    10  | cat1     | 1
    10  | cat2     | 1
    10  | cat3     | 0    
    

    From there, you'd need to do a PIVOT to get your Category values as columns:

    SELECT
        ID,
        cat1,
        cat2,
        cat3
    FROM v_CategoryReports
    PIVOT (
        MAX([Reported]) FOR Category IN ([cat1], [cat2], [cat3])
    ) p
    

    Since you mentioned over 50 'Categories', I'll assume they're not really 'cat1' - 'cat50'. In that case, you'll need to generate the PIVOT statement dynamically.

    SqlFiddle with a self-contained example.
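    If you would rather avoid PIVOT (or need something portable), the same reshaping can be done with conditional aggregation. Here is a sketch using Python's sqlite3 against the normalized v_CategoryReports shape shown above (built as a plain table here, with data matching the sample output):

```python
import sqlite3

# The normalized (ID, Category, Reported) form from above, as a plain table.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE v_CategoryReports (ID INTEGER, Category TEXT, Reported INTEGER);
INSERT INTO v_CategoryReports VALUES
 (10, 'cat1', 1), (10, 'cat2', 1), (10, 'cat3', 0);
""")

# Conditional aggregation: one MAX(CASE ...) column per category value.
rows = con.execute("""
SELECT ID,
       MAX(CASE WHEN Category = 'cat1' THEN Reported END) AS cat1,
       MAX(CASE WHEN Category = 'cat2' THEN Reported END) AS cat2,
       MAX(CASE WHEN Category = 'cat3' THEN Reported END) AS cat3
FROM v_CategoryReports
GROUP BY ID
""").fetchall()
print(rows)  # [(10, 1, 1, 0)]
```

    For the 50-category case, the MAX(CASE ...) lines are exactly what you would generate dynamically, one per category.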

    qid & accept id: (17524409, 17524573) query: Compare two sets in MySQL for equality soup:
    soup wrap:
    WHERE language IN('x','y') GROUP BY emp_id HAVING COUNT(*) = 2 
    

    (where '2' is the number of items in the IN clause)

    So your whole query could be:

    SELECT e.emp_Id
         , e.Name
      FROM Employee e
      JOIN Employee_Language l
        ON e.emp_id = l.emp_id
     WHERE l.Language IN('English', 'French')
     GROUP  
        BY e.emp_id 
    HAVING COUNT(*) = 2
    

    See this SQLFiddle
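    Here is the technique end-to-end as a small sketch using Python's sqlite3 (the employee names are invented; note that if (emp_id, Language) is not guaranteed unique you would want COUNT(DISTINCT l.Language) instead of COUNT(*)):

```python
import sqlite3

# Invented sample data: Ann speaks both languages, Bob only one.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Employee (emp_id INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Employee_Language (emp_id INTEGER, Language TEXT);
INSERT INTO Employee VALUES (1, 'Ann'), (2, 'Bob');
INSERT INTO Employee_Language VALUES
 (1, 'English'), (1, 'French'),
 (2, 'English');
""")

# Keep only employees whose language rows cover the whole IN list.
rows = con.execute("""
SELECT e.emp_id, e.Name
FROM   Employee e
JOIN   Employee_Language l ON e.emp_id = l.emp_id
WHERE  l.Language IN ('English', 'French')
GROUP  BY e.emp_id
HAVING COUNT(*) = 2
""").fetchall()
print(rows)  # [(1, 'Ann')]
```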

    qid & accept id: (17535389, 17535893) query: MySQL create temporary fields with values from another table soup:

    soup wrap:

    I built a schema based on your image. This is what I came up with:

    SELECT
      a.id,
      a.first_name,
      a.surname,
      if (b1.type is null, '', 'on') as A1,
      if (b2.type is null, '', 'on') as A2,
      if (b3.type is null, '', 'on') as A3
    FROM `a`
      LEFT JOIN `b` as b1 ON a.id = b1.uid AND b1.type = 1 AND b1.status = 'accepted'
      LEFT JOIN `b` as b2 ON a.id = b2.uid AND b2.type = 2 AND b2.status = 'accepted'
      LEFT JOIN `b` as b3 ON a.id = b3.uid AND b3.type = 3 AND b3.status = 'accepted'
    GROUP BY a.id;
    

    Result:

    +----+------------+-----------+----+----+----+
    | id | first_name | surname   | A1 | A2 | A3 |
    +----+------------+-----------+----+----+----+
    |  1 | john       | smith     | on |    |    |
    |  2 | david      | russel    | on | on |    |
    |  3 | james      | duncan    | on |    | on |
    |  4 | gavin      | dow       | on | on |    |
    +----+------------+-----------+----+----+----+
    

    Here's the data I used:

    --
    -- Table structure for table `a`
    --
    
    CREATE TABLE IF NOT EXISTS `a` (
      `id` int(10) unsigned NOT NULL,
      `first_name` varchar(32) NOT NULL,
      `surname` varchar(32) NOT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
    
    --
    -- Dumping data for table `a`
    --
    
    INSERT INTO `a` (`id`, `first_name`, `surname`) VALUES
    (1, 'john', 'smith'),
    (2, 'david', 'russel'),
    (3, 'james', 'duncan'),
    (4, 'gavin', 'dow');
    
    --
    -- Table structure for table `b`
    --
    
    CREATE TABLE IF NOT EXISTS `b` (
      `id` int(10) unsigned NOT NULL,
      `uid` int(10) unsigned NOT NULL,
      `type` int(10) NOT NULL,
      `status` varchar(32) NOT NULL,
      PRIMARY KEY (`id`),
      KEY `uid` (`uid`)
    ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
    
    --
    -- Dumping data for table `b`
    --
    
    INSERT INTO `b` (`id`, `uid`, `type`, `status`) VALUES
    (1, 1, 1, 'accepted'),
    (2, 2, 1, 'accepted'),
    (3, 2, 2, 'accepted'),
    (4, 4, 1, 'accepted'),
    (5, 4, 2, 'accepted'),
    (6, 4, 3, 'declined'),
    (7, 3, 1, 'accepted'),
    (8, 3, 2, 'declined'),
    (9, 1, 2, 'declined'),
    (10, 3, 3, 'accepted');
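    As a quick check, the schema and data above can be replayed through Python's sqlite3. SQLite has no IF() function, so CASE WHEN stands in for it and the backticks are dropped; since each LEFT JOIN can match at most one row per person with this data, GROUP BY a.id is replaced with ORDER BY a.id:

```python
import sqlite3

# The schema and data from this answer, loaded into in-memory SQLite.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a (id INTEGER PRIMARY KEY, first_name TEXT, surname TEXT);
CREATE TABLE b (id INTEGER PRIMARY KEY, uid INTEGER, type INTEGER, status TEXT);
INSERT INTO a VALUES
 (1, 'john', 'smith'), (2, 'david', 'russel'),
 (3, 'james', 'duncan'), (4, 'gavin', 'dow');
INSERT INTO b VALUES
 (1, 1, 1, 'accepted'), (2, 2, 1, 'accepted'), (3, 2, 2, 'accepted'),
 (4, 4, 1, 'accepted'), (5, 4, 2, 'accepted'), (6, 4, 3, 'declined'),
 (7, 3, 1, 'accepted'), (8, 3, 2, 'declined'), (9, 1, 2, 'declined'),
 (10, 3, 3, 'accepted');
""")

# One LEFT JOIN per type; CASE WHEN replaces MySQL's IF().
rows = con.execute("""
SELECT a.id, a.first_name, a.surname,
       CASE WHEN b1.type IS NULL THEN '' ELSE 'on' END AS A1,
       CASE WHEN b2.type IS NULL THEN '' ELSE 'on' END AS A2,
       CASE WHEN b3.type IS NULL THEN '' ELSE 'on' END AS A3
FROM a
LEFT JOIN b b1 ON a.id = b1.uid AND b1.type = 1 AND b1.status = 'accepted'
LEFT JOIN b b2 ON a.id = b2.uid AND b2.type = 2 AND b2.status = 'accepted'
LEFT JOIN b b3 ON a.id = b3.uid AND b3.type = 3 AND b3.status = 'accepted'
ORDER BY a.id
""").fetchall()
print(rows)
```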
    
    qid & accept id: (17558290, 17558698) query: Performing a simple search in MySQL db with variable amount of input soup:

    soup wrap:

    Using keyword-search predicates such as LIKE '%pattern%' is a sure way to cause poor performance, because it forces a table scan.

    The best way to do a relational division query, that is to match only movies where all three criteria are matched, is to find individual rows for each of the criteria, and then JOIN them together.

    SELECT f.*, CONCAT_WS(' ', a1.ambienceName, a2.ambienceName, a3.ambienceName) AS ambiences
    FROM Films AS f 
    INNER JOIN Films_Ambiences as fa1 ON f.id = fa1.film_id           
    INNER JOIN Ambiences AS a1 ON a1.id = fa1.ambience_id
    INNER JOIN Films_Ambiences as fa2 ON f.id = fa2.film_id           
    INNER JOIN Ambiences AS a2 ON a2.id = fa2.ambience_id
    INNER JOIN Films_Ambiences as fa3 ON f.id = fa3.film_id           
    INNER JOIN Ambiences AS a3 ON a3.id = fa3.ambience_id
    WHERE (a1.ambienceName, a2.ambienceName, a3.ambienceName) = (?, ?, ?);
    

    You'll need an additional JOIN to Films_Ambiences and Ambiences for each search term.
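    Here is a compact sketch of the relational-division shape using Python's sqlite3, loaded with the same films/ambiences data as the test example later in this answer. The ambienceName equality is moved into each join condition, which is equivalent to the row-value WHERE clause:

```python
import sqlite3

# Films/ambiences data matching the test example in this answer.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Films (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE Ambiences (id INTEGER PRIMARY KEY, ambienceName TEXT);
CREATE TABLE Films_Ambiences (film_id INTEGER, ambience_id INTEGER);
INSERT INTO Ambiences (ambienceName) VALUES
 ('funny'), ('scary'), ('1950s'), ('London'), ('bank'), ('crime'), ('stupid');
INSERT INTO Films (title) VALUES
 ('Mary Poppins'), ('Heist'), ('Scary Movie'), ('Godzilla'), ('Signs');
INSERT INTO Films_Ambiences VALUES
 (1,1),(1,2),(1,4),(1,5), (2,1),(2,2),(2,5),(2,6),
 (3,1),(3,2),(3,7), (4,2),(4,3), (5,2),(5,7);
""")

# One Films_Ambiences/Ambiences join pair per search term; a film survives
# all three inner joins only if it carries all three ambiences.
rows = con.execute("""
SELECT f.title
FROM   Films f
JOIN   Films_Ambiences fa1 ON f.id = fa1.film_id
JOIN   Ambiences a1 ON a1.id = fa1.ambience_id AND a1.ambienceName = ?
JOIN   Films_Ambiences fa2 ON f.id = fa2.film_id
JOIN   Ambiences a2 ON a2.id = fa2.ambience_id AND a2.ambienceName = ?
JOIN   Films_Ambiences fa3 ON f.id = fa3.film_id
JOIN   Ambiences a3 ON a3.id = fa3.ambience_id AND a3.ambienceName = ?
ORDER  BY f.id
""", ('funny', 'scary', 'bank')).fetchall()
print(rows)  # [('Mary Poppins',), ('Heist',)]
```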

    You should have an index on ambienceName, and then all three lookups will be more efficient.

    ALTER TABLE Ambiences ADD KEY (ambienceName);
    

    I compared different solutions for relational division in a recent presentation:


    Re your comment:

    Is there a way to alter this query so that it also displays the rest of the ambiences after the criteria are found?

    Yes, but you have to join one more time to get the full set of ambiences for the film:

    SELECT f.*, GROUP_CONCAT(a_all.ambienceName) AS ambiences
    FROM Films AS f 
    INNER JOIN Films_Ambiences as fa1 ON f.id = fa1.film_id           
    INNER JOIN Ambiences AS a1 ON a1.id = fa1.ambience_id
    INNER JOIN Films_Ambiences as fa2 ON f.id = fa2.film_id           
    INNER JOIN Ambiences AS a2 ON a2.id = fa2.ambience_id
    INNER JOIN Films_Ambiences as fa3 ON f.id = fa3.film_id           
    INNER JOIN Ambiences AS a3 ON a3.id = fa3.ambience_id
    INNER JOIN Films_Ambiences AS fa_all ON f.id = fa_all.film_id
    INNER JOIN Ambiences AS a_all ON a_all.id = fa_all.ambience_id
    WHERE (a1.ambienceName, a2.ambienceName, a3.ambienceName) = (?, ?, ?)
    GROUP BY f.id;
    

    is there a way to alter this query so that the result are only films that have the ambiences required but no more?

    The query above should do that.


    What the query does, I think, is look for films that include the given ambiences (so it also finds films that have more ambiences).

    Right, the query does not match a film unless it matches all three ambiences in the search criteria. But the film may have other ambiences beyond those in the search criteria, and all of the film's ambiences (those in the search criteria plus others) are collected as GROUP_CONCAT(a_all.ambienceName).

    I tested this example:

    mysql> INSERT INTO Ambiences (ambienceName) 
     VALUES ('funny'), ('scary'), ('1950s'), ('London'), ('bank'), ('crime'), ('stupid');
    mysql> INSERT INTO Films (title) 
     VALUES ('Mary Poppins'), ('Heist'), ('Scary Movie'), ('Godzilla'), ('Signs');
    mysql> INSERT INTO Films_Ambiences 
     VALUES (1,1),(1,2),(1,4),(1,5), (2,1),(2,2),(2,5),(2,6), (3,1),(3,2),(3,7), (4,2),(4,3), (5,2),(5,7);
    
    mysql> SELECT f.*, GROUP_CONCAT(a_all.ambienceName) AS ambiences 
     FROM Films AS f  
     INNER JOIN Films_Ambiences as fa1 ON f.id = fa1.film_id            
     INNER JOIN Ambiences AS a1 ON a1.id = fa1.ambience_id 
     INNER JOIN Films_Ambiences as fa2 ON f.id = fa2.film_id            
     INNER JOIN Ambiences AS a2 ON a2.id = fa2.ambience_id 
     INNER JOIN Films_Ambiences as fa3 ON f.id = fa3.film_id            
     INNER JOIN Ambiences AS a3 ON a3.id = fa3.ambience_id 
     INNER JOIN Films_Ambiences AS fa_all ON f.id = fa_all.film_id 
     INNER JOIN Ambiences AS a_all ON a_all.id = fa_all.ambience_id 
     WHERE (a1.ambienceName, a2.ambienceName, a3.ambienceName) = ('funny','scary','bank') 
     GROUP BY f.id;
    +----+--------------+-------------------------+
    | id | Title        | ambiences               |
    +----+--------------+-------------------------+
    |  1 | Mary Poppins | funny,scary,London,bank |
    |  2 | Heist        | funny,scary,bank,crime  |
    +----+--------------+-------------------------+
    

    By the way, here's the EXPLAIN showing usage of indexes:

    +----+-------------+--------+--------+----------------------+--------------+---------+-----------------------------+------+-----------------------------------------------------------+
    | id | select_type | table  | type   | possible_keys        | key          | key_len | ref                         | rows | Extra                                                     |
    +----+-------------+--------+--------+----------------------+--------------+---------+-----------------------------+------+-----------------------------------------------------------+
    |  1 | SIMPLE      | a1     | ref    | PRIMARY,ambienceName | ambienceName | 258     | const                       |    1 | Using where; Using index; Using temporary; Using filesort |
    |  1 | SIMPLE      | a2     | ref    | PRIMARY,ambienceName | ambienceName | 258     | const                       |    1 | Using where; Using index                                  |
    |  1 | SIMPLE      | a3     | ref    | PRIMARY,ambienceName | ambienceName | 258     | const                       |    1 | Using where; Using index                                  |
    |  1 | SIMPLE      | fa1    | ref    | PRIMARY,ambience_id  | ambience_id  | 4       | test.a1.id                  |    1 | Using index                                               |
    |  1 | SIMPLE      | f      | eq_ref | PRIMARY              | PRIMARY      | 4       | test.fa1.film_id            |    1 | NULL                                                      |
    |  1 | SIMPLE      | fa2    | eq_ref | PRIMARY,ambience_id  | PRIMARY      | 8       | test.fa1.film_id,test.a2.id |    1 | Using index                                               |
    |  1 | SIMPLE      | fa3    | eq_ref | PRIMARY,ambience_id  | PRIMARY      | 8       | test.fa1.film_id,test.a3.id |    1 | Using index                                               |
    |  1 | SIMPLE      | fa_all | ref    | PRIMARY,ambience_id  | PRIMARY      | 4       | test.fa1.film_id            |    1 | Using index                                               |
    |  1 | SIMPLE      | a_all  | eq_ref | PRIMARY              | PRIMARY      | 4       | test.fa_all.ambience_id     |    1 | NULL                                                      |
    +----+-------------+--------+--------+----------------------+--------------+---------+-----------------------------+------+-----------------------------------------------------------+
    

    I have a film1 which is scary, funny, stupid. When I search for a film which is only scary, stupid I will get film1 anyway. What if I don't want that?

    Oh, okay, I totally didn't understand that was what you meant, and it's an unusual requirement in these types of problems.

    Here's a solution:

    mysql> SELECT f.*, GROUP_CONCAT(a_all.ambienceName) AS ambiences
     FROM Films AS f
     INNER JOIN Films_Ambiences as fa1 ON f.id = fa1.film_id
     INNER JOIN Ambiences AS a1 ON a1.id = fa1.ambience_id
     INNER JOIN Films_Ambiences as fa2 ON f.id = fa2.film_id
     INNER JOIN Ambiences AS a2 ON a2.id = fa2.ambience_id
     INNER JOIN Films_Ambiences AS fa_all ON f.id = fa_all.film_id
     WHERE (a1.ambienceName, a2.ambienceName) = ('scary','stupid')
     GROUP BY f.id
     HAVING COUNT(*) = 2
    +----+-------+--------------+
    | id | Title | ambiences    |
    +----+-------+--------------+
    |  5 | Signs | scary,stupid |
    +----+-------+--------------+
    

    There's no need to join to a_all in this case, because we don't need the list of ambience names, only the count of ambiences, which we can get just by joining to fa_all.
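For what it's worth, the same two behaviours can also be expressed with a single join plus IN and HAVING COUNT, which ports to most engines. Below is a minimal sketch in Python's sqlite3 using the sample data above; note the IN/HAVING formulation is an alternative to the tuple-equality self-joins, not a transcription of them:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Films (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE Ambiences (id INTEGER PRIMARY KEY, ambienceName TEXT);
CREATE TABLE Films_Ambiences (film_id INT, ambience_id INT);
INSERT INTO Ambiences (ambienceName) VALUES
  ('funny'), ('scary'), ('1950s'), ('London'), ('bank'), ('crime'), ('stupid');
INSERT INTO Films (title) VALUES
  ('Mary Poppins'), ('Heist'), ('Scary Movie'), ('Godzilla'), ('Signs');
INSERT INTO Films_Ambiences VALUES
  (1,1),(1,2),(1,4),(1,5), (2,1),(2,2),(2,5),(2,6),
  (3,1),(3,2),(3,7), (4,2),(4,3), (5,2),(5,7);
""")

# Superset search: films having at least 'scary' and 'stupid'
# (assumes (film_id, ambience_id) pairs are unique).
superset = conn.execute("""
    SELECT f.title
    FROM Films f
    JOIN Films_Ambiences fa ON fa.film_id = f.id
    JOIN Ambiences a ON a.id = fa.ambience_id
    WHERE a.ambienceName IN ('scary', 'stupid')
    GROUP BY f.id
    HAVING COUNT(*) = 2
    ORDER BY f.id
""").fetchall()

# Exact match: additionally require the film's TOTAL ambience count to be 2,
# so films with extra ambiences drop out.
exact = conn.execute("""
    SELECT f.title
    FROM Films f
    JOIN Films_Ambiences fa ON fa.film_id = f.id
    JOIN Ambiences a ON a.id = fa.ambience_id
    WHERE a.ambienceName IN ('scary', 'stupid')
    GROUP BY f.id
    HAVING COUNT(*) = 2
       AND (SELECT COUNT(*) FROM Films_Ambiences WHERE film_id = f.id) = 2
    ORDER BY f.id
""").fetchall()

print(superset)  # [('Scary Movie',), ('Signs',)]
print(exact)     # [('Signs',)]
```

The correlated count in the HAVING clause plays the same role as the count over fa_all above: it is what turns "at least these ambiences" into "exactly these ambiences".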

    qid & accept id: (17566573, 17566745) query: Convert month shortname to month number soup:

    soup wrap:

    Use STR_TO_DATE() function to convert String to Date like this:

    SELECT STR_TO_DATE('Apr','%b')
    

    And use MONTH() to get month number from the date like this:

    SELECT MONTH(STR_TO_DATE('Apr','%b'))
    

    See this SQLFiddle
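STR_TO_DATE() is MySQL-specific. If the conversion ever has to happen in application code instead, the same %b token works in Python's strptime (a sketch, assuming an English locale for month names):

```python
from datetime import datetime

def month_number(short_name: str) -> int:
    """Convert a month short name like 'Apr' to its number, e.g. 4."""
    # %b parses abbreviated month names, just like in STR_TO_DATE('Apr','%b').
    return datetime.strptime(short_name, "%b").month

print(month_number("Apr"))  # 4
```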

    qid & accept id: (17596708, 17597540) query: SQL Restrict Column values using another Table soup:

    soup wrap:

    Add a unique constraint to AllowedColors. (And consider dropping the column "ID".)

    alter table AllowedColors
    add constraint your_constraint_name
    unique (FamilyID, ColorID);
    

    You probably want each of those columns to be declared NOT NULL, too. I'll leave that to you.

    Now you can use that pair of columns as the target of a foreign key constraint.

    alter table fruit
    add constraint another_constraint_name
    foreign key (FamilyID, ColorID) 
      references AllowedColors (FamilyID, ColorID);
    

    You'll also want a foreign key from AllowedColors.FamilyID to Family.FamilyID.
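The same pattern is easy to verify in SQLite, which accepts composite foreign keys as long as the referenced column pair has a unique index, and enforces them only when PRAGMA foreign_keys is on. A sketch with made-up sample values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.executescript("""
CREATE TABLE AllowedColors (
    FamilyID INT NOT NULL,
    ColorID  INT NOT NULL,
    CONSTRAINT uq_allowed UNIQUE (FamilyID, ColorID)
);
CREATE TABLE fruit (
    name     TEXT,
    FamilyID INT NOT NULL,
    ColorID  INT NOT NULL,
    FOREIGN KEY (FamilyID, ColorID) REFERENCES AllowedColors (FamilyID, ColorID)
);
INSERT INTO AllowedColors VALUES (1, 10), (1, 11);
""")

conn.execute("INSERT INTO fruit VALUES ('apple', 1, 10)")  # allowed pair: fine

try:
    conn.execute("INSERT INTO fruit VALUES ('kiwi', 1, 99)")  # pair not allowed
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

print(rejected)  # True: the invalid (FamilyID, ColorID) combination was refused
```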

    qid & accept id: (17598953, 17599702) query: postgresql: select non-outliers from view soup:

    soup wrap:

    Before Postgres 8.4 there is no built-in way to get a percentage of rows with a single query. Consider this closely related thread on the pgsql-sql list.

    You could write a function doing the work in a single call. This should work in Postgres 8.3:

    CREATE OR REPLACE FUNCTION foo(_pct int)
      RETURNS SETOF v_t AS
    $func$
    DECLARE
       _ct     int := (SELECT count(*) FROM v_t);
       _offset int := (_ct * $1) / 100;
       _limit  int := (_ct * (100 - 2 * $1)) / 100;
    BEGIN
    
    RETURN QUERY
    SELECT *
    FROM   v_t
    OFFSET _offset
    LIMIT  _limit;
    
    END
    $func$ LANGUAGE plpgsql;
    

    Call:

    SELECT * FROM foo(5)
    

    This actually crops 5% from top and bottom.

    The return type RETURNS SETOF v_t is derived from a view named v_t directly.

    -> SQLfiddle for Postgres 8.3.
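The OFFSET/LIMIT arithmetic itself is portable; it can be sanity-checked on any engine by computing the bounds in application code. A SQLite sketch over 100 made-up rows (with an explicit ORDER BY added, since cropping an arbitrary row order is meaningless):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE v_t (x INT)")
conn.executemany("INSERT INTO v_t VALUES (?)", [(i,) for i in range(1, 101)])

def crop(pct: int):
    """Return rows of v_t with pct % cropped from the top and the bottom,
    using the same arithmetic as the plpgsql function above."""
    ct = conn.execute("SELECT COUNT(*) FROM v_t").fetchone()[0]
    offset = ct * pct // 100
    limit = ct * (100 - 2 * pct) // 100
    return conn.execute(
        "SELECT x FROM v_t ORDER BY x LIMIT ? OFFSET ?", (limit, offset)
    ).fetchall()

rows = crop(5)
print(len(rows), rows[0], rows[-1])  # 90 (6,) (95,)
```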

    qid & accept id: (17665628, 17665993) query: alias column name by lookup query soup:

    soup wrap:

    How to solve this problem depends on what you are doing with the result. If you have a front end with some programming ability, you can do a select like this (I'm assuming all column names are the same in both tables):

        SELECT "Column Head" as RowType, * FROM TABLEA
    UNION ALL
        SELECT "Column Value" as RowType, * FROM TABLEB
    

    This will give you something like this:

    RowType         DPSF0010001     DPSF0010002     DPSF0010003     DPSF0010004     DPSF0010005     DPSF0010006     DPSF0010007     DPSF0010008     DPSF0010009     DPSF0010010     DPSF0010011     DPSF0010012     DPSF0010013     DPSF0010014     DPSF0010015
    Column Head     Total:          Under 5 years   5 to 9 years    10 to 14 years  15 to 19 years  20 to 24 years  25 to 29 years  30 to 34 years  35 to 39 years  40 to 44 years  45 to 49 years  50 to 54 years  55 to 59 years  60 to 64 years  65 to 69 years
    Column Value    4973            139             266             437             391             146             100             78              141             253             425             491             501             477             382
    

    Which should be easy to display in whatever your front end is.

    qid & accept id: (17670284, 17671028) query: How to return record from an Oracle function with JOIN query? soup:

    soup wrap:

    You can use a strongly typed cursor and its rowtype:

    -- example data
    create table t1(pk number not null primary key, val varchar2(30));
    create table t2(
      pk number not null primary key, 
      t1_fk references t1(pk), 
      val varchar2(30));
    
    insert into t1(pk, val) values(1, 'value1');
    insert into t2(pk, t1_fk, val) values(1, 1, 'value2a');
    insert into t2(pk, t1_fk, val) values(2, 1, 'value2b');
    
    declare
      cursor cur is 
      select t1.*, t2.val as t2_val 
      from t1
      join t2 on t1.pk = t2.t1_fk;
    
      function get_data(arg in pls_integer) return cur%rowtype is
          l_result cur%rowtype;
        begin
          select t1.*, t2.val as t2_val 
            into l_result 
            from t1 
            join t2 on t1.pk = t2.t1_fk
            where t2.pk = arg;
          return l_result;
        end;
    begin
      dbms_output.put_line(get_data(2).t2_val);
    end;
    

    UPDATE: you can easily wrap the cursor and function inside a PL/SQL package:

    create or replace package pkg_get_data as 
    
      cursor cur is 
      select t1.*, t2.val as t2_val 
      from t1
      join t2 on t1.pk = t2.t1_fk;
    
      function get_data(arg in pls_integer) return cur%rowtype;
    end;
    

    (package body omitted)

    qid & accept id: (17703008, 17703055) query: Counting with SQL soup:
    soup wrap:
    SELECT count(letter) occurences,
           letter
    FROM table
    GROUP BY letter
    ORDER BY letter ASC
    

    Basically you're looking for the COUNT() function. Be aware that it is an aggregate function and you must use GROUP BY at the end of your SELECT statement.


    If you have your letters in two columns (say col1 and col2), you should first union them into a single column and do the count afterwards, like this:

    SELECT count(letter) occurences,
           letter
    FROM (SELECT col1 letter
          FROM table
          UNION ALL -- plain UNION would remove duplicate rows and skew the counts
          SELECT col2 letter
          FROM table) t
    GROUP BY letter 
    ORDER BY letter;
    

    The inner SELECT appends the content of col2 to col1 and renames the resulting column to "letter". The outer SELECT counts the occurrences of each letter in this resulting column.
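A quick SQLite check of the technique with made-up data. Two details matter in practice: the union must keep duplicates (UNION ALL), because the duplicates are exactly what is being counted, and the derived table needs an alias in most engines:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 TEXT, col2 TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("a", "b"), ("a", "a"), ("c", "b")])

# UNION ALL keeps duplicates, which a frequency count needs;
# plain UNION would collapse each letter to a single row.
rows = conn.execute("""
SELECT COUNT(letter) AS occurences, letter
FROM (SELECT col1 AS letter FROM t
      UNION ALL
      SELECT col2 AS letter FROM t) u
GROUP BY letter
ORDER BY letter
""").fetchall()

print(rows)  # [(3, 'a'), (2, 'b'), (1, 'c')]
```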

    qid & accept id: (17703863, 17704210) query: Finding Top level parent of each row of a table [SQL Server 2008] soup:

    soup wrap:

    I have also updated the answer in the original question, but never-mind, here is a copy also:

    ;WITH RCTE AS
    (
        SELECT  ParentId, ChildId, 1 AS Lvl FROM RelationHierarchy 
    
        UNION ALL
    
        SELECT rh.ParentId, rc.ChildId, Lvl+1 AS Lvl 
        FROM dbo.RelationHierarchy rh
        INNER JOIN RCTE rc ON rh.ChildId = rc.ParentId
    )
    ,CTE_RN AS 
    (
        SELECT *, ROW_NUMBER() OVER (PARTITION BY r.ChildID ORDER BY r.Lvl DESC) RN
        FROM RCTE r
    
    )
    SELECT pc.Id AS ChildID, pc.Name AS ChildName, r.ParentId, pp.Name AS ParentName
    FROM dbo.Person pc 
    LEFT JOIN CTE_RN r ON pc.id = r.CHildId AND  RN =1
    LEFT JOIN dbo.Person pp ON pp.id = r.ParentId
    

    SQLFiddle DEMO

    Note that the slight difference is in the recursive part of the CTE: ChildId is now carried over from the anchor part on every step. Also added is the ROW_NUMBER() function (and a new CTE) to get the top-level parent for each child at the end.

    EDIT - Version2

    After finding performance issues with the first query, here is an improved version. Going from top to bottom, instead of the other way around, eliminates the creation of extra rows in the CTE and should be much faster for a high number of recursions:

    ;WITH RCTE AS
    (
        SELECT  ParentId, CHildId, 1 AS Lvl FROM RelationHierarchy r1
        WHERE NOT EXISTS (SELECT * FROM RelationHierarchy r2 WHERE r2.CHildId = r1.ParentId)
    
        UNION ALL
    
        SELECT rc.ParentId, rh.CHildId, Lvl+1 AS Lvl 
        FROM dbo.RelationHierarchy rh
        INNER JOIN RCTE rc ON rc.CHildId = rh.ParentId
    )
    SELECT pc.Id AS ChildID, pc.Name AS ChildName, r.ParentId, pp.Name AS ParentName
    FROM dbo.Person pc 
    LEFT JOIN RCTE r ON pc.id = r.CHildId
    LEFT JOIN dbo.Person pp ON pp.id = r.ParentId 
    

    SQLFiddle DEMO
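Version 2's top-down recursion translates almost verbatim to any engine with recursive CTEs. A SQLite sketch on a tiny made-up hierarchy (1 -> 2 -> 3, and 1 -> 4), leaving out the Person name lookups:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE RelationHierarchy (ParentId INT, ChildId INT);
INSERT INTO RelationHierarchy VALUES (1, 2), (2, 3), (1, 4);
""")

# Top-down, like Version 2: anchor on rows whose parent never appears as a
# child (the roots), then push each root down to all of its descendants.
rows = conn.execute("""
WITH RECURSIVE rcte(ParentId, ChildId) AS (
    SELECT ParentId, ChildId FROM RelationHierarchy r1
    WHERE NOT EXISTS (SELECT 1 FROM RelationHierarchy r2
                      WHERE r2.ChildId = r1.ParentId)
    UNION ALL
    SELECT rc.ParentId, rh.ChildId
    FROM RelationHierarchy rh
    JOIN rcte rc ON rc.ChildId = rh.ParentId
)
SELECT ChildId, ParentId FROM rcte ORDER BY ChildId
""").fetchall()

print(rows)  # [(2, 1), (3, 1), (4, 1)]: every child mapped to top-level parent 1
```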

    qid & accept id: (17707945, 17708118) query: SQL - Find the binary representation from the place of '1's soup:

    soup wrap:

    You can use bitwise AND and string concatenation:

    select (case when test&4 > 0 then '1' else '0' end) +
           (case when test&2 > 0 then '1' else '0' end) +
           (case when test&1 > 0 then '1' else '0' end)
    from (select 6 as test) t;
    

    If you are allergic to case statements, you could do this:

    select CHAR(ascii(0) + (test&4)/4) +
           CHAR(ascii(0) + (test&2)/2) +
           CHAR(ascii(0) + (test&1)/1)
    from (select 6 as test) t
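The CASE-per-bit approach is portable; only the concatenation operator changes between dialects. A quick SQLite check (|| instead of +):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def to_bits(n: int) -> str:
    """Render the low 3 bits of n as a '110'-style string, using the same
    CASE-per-bit idea as above; SQLite concatenates with ||."""
    return conn.execute("""
        SELECT (CASE WHEN ? & 4 > 0 THEN '1' ELSE '0' END)
            || (CASE WHEN ? & 2 > 0 THEN '1' ELSE '0' END)
            || (CASE WHEN ? & 1 > 0 THEN '1' ELSE '0' END)
    """, (n, n, n)).fetchone()[0]

print(to_bits(6))  # '110'
```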
    
    qid & accept id: (17750801, 17750863) query: Check if mysql field contains a certain number in mysql query soup:

    soup wrap:

    http://dev.mysql.com/doc/refman/5.6/en/string-functions.html#function_find-in-set

    SELECT ...
    WHERE FIND_IN_SET(5, list_column)
    

    But understand that this search is bound to be very slow. It cannot use an index, and it will cause a full table-scan (reading every row in the table). As the table grows, the query will become unusably slow.

    Please read my answer to Is storing a delimited list in a database column really that bad?


    You can use @MikeChristensen's answer to be more standard. Another trick with standard SQL is this:

    select * from TableName
    where ',' || ids || ',' LIKE '%,5,%'
    

    (in standard SQL, || is the string concatenation operator, but in MySQL, you have to SET SQL_MODE=PIPES_AS_CONCAT or SET SQL_MODE=ANSI to get that behavior.)

    Another MySQL-specific solution is to use a special word-boundary regular expression, which will match either the comma punctuation or beginning/end of string:

    select * from TableName
    where ids RLIKE '[[:<:]]5[[:>:]]'
    

    None of these solutions scale well; they all cause table-scans. Sorry, I understand you cannot change the database design, but if your project later requires the query to be faster, you can tell them it's not possible without redesigning the table.
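The comma-wrapping trick is easy to sanity-check in SQLite, where || is the native concatenation operator. Wrapping both the column and the search term is what keeps 5 from matching inside 15 or 51:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TableName (ids TEXT)")
conn.executemany("INSERT INTO TableName VALUES (?)",
                 [("1,5,12",), ("15,51",), ("5",)])

# ',1,5,12,' and ',5,' both contain ',5,'; ',15,51,' does not.
rows = conn.execute("""
SELECT ids FROM TableName
WHERE ',' || ids || ',' LIKE '%,5,%'
ORDER BY rowid
""").fetchall()

print(rows)  # [('1,5,12',), ('5',)]
```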

    qid & accept id: (17769111, 17769271) query: How to insert data into a table and get value of a column soup:

    soup wrap:

    When using a stored procedure there are two basic methods:
    1. right after the INSERT INTO Invoice, use SCOPE_IDENTITY()
    2. use INSERT with the OUTPUT clause


    After the comment:

    in the Stored Procedure:

    DECLARE @Scope_Ident INT
    INSERT [Table] ()
    VALUES ()
    
    SET @Scope_Ident = SCOPE_IDENTITY() 
    

    If you then need to return the ID to the application, do:

    SELECT @Scope_Ident
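For comparison, SQLite's analogue of SCOPE_IDENTITY() is last_insert_rowid(), surfaced in Python as cursor.lastrowid. A sketch (the amount column on Invoice is made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Invoice (id INTEGER PRIMARY KEY AUTOINCREMENT, amount REAL)"
)

cur = conn.execute("INSERT INTO Invoice (amount) VALUES (9.99)")
new_id = cur.lastrowid        # id generated by the first insert

cur = conn.execute("INSERT INTO Invoice (amount) VALUES (19.99)")
second_id = cur.lastrowid     # id generated by the second insert

print(new_id, second_id)  # 1 2
```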
    
    qid & accept id: (17810221, 17810289) query: SQL - return records on the fist date where records exist soup:
    soup wrap:
    Select Min(Date) 
    from #DATEDATA
    Where Date>=@WeekendDate
    

    or

    Select * from #DATEDATA
    where Date=
    (
    Select Min(Date) 
    from #DATEDATA
    Where Date>=@WeekendDate
    )
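The second form, returning every record that falls on the first qualifying date, is straightforward to verify in SQLite (the temp table becomes a plain table here, and @WeekendDate becomes a bound parameter):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE DATEDATA (Date TEXT, val INT)")
conn.executemany("INSERT INTO DATEDATA VALUES (?, ?)", [
    ("2013-07-01", 1), ("2013-07-05", 2), ("2013-07-05", 3), ("2013-07-09", 4),
])

weekend_date = "2013-07-03"
# All records on the first date on or after the cutoff:
rows = conn.execute("""
SELECT * FROM DATEDATA
WHERE Date = (SELECT MIN(Date) FROM DATEDATA WHERE Date >= ?)
ORDER BY val
""", (weekend_date,)).fetchall()

print(rows)  # both rows dated 2013-07-05, the first date >= the cutoff
```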
    
    qid & accept id: (17828198, 17828548) query: Sql subquery with inner join soup:

    soup wrap:

    Your database schema is not completely clear to me, but it seems you can link tourists from the Tourist table to their extra charges in the EXTRA_CHARGES table via the Tourist_Extra_Charges table like this:

    SELECT  T.Tourist_ID
            ,T.Tourist_Name
            ,EC.Extra_Charge_ID
            ,EC.Extra_Charge_Description
    FROM    Tourist AS T
    INNER JOIN Tourist_Extra_Charges AS TEC ON T.Tourist_ID= TEC.Tourist_ID
    INNER JOIN EXTRA_CHARGES AS EC ON TEC.Extra_Charge_ID = EC.Extra_Charge_ID;
    

    EDIT

    If you want to be able to filter on Reservation_ID, you'll have to join the tables Tourist_Reservations and Reservations as well, like this:

    SELECT  T.Tourist_ID
            ,T.Tourist_Name
            ,EC.Extra_Charge_ID
            ,EC.Extra_Charge_Description
    FROM    Tourist AS T
    INNER JOIN Tourist_Extra_Charges AS TEC ON T.Tourist_ID= TEC.Tourist_ID
    INNER JOIN EXTRA_CHARGES AS EC ON TEC.Extra_Charge_ID = EC.Extra_Charge_ID
    INNER JOIN Tourist_Reservations AS TR ON T.Tourist_ID = TR.Tourist_ID
    INNER JOIN Reservations AS R ON TR.Reservation_ID = R.Reservation_ID
    WHERE   R.Reservation_ID = 27;
    

    As for your database schema: please note that the field Extra_Charge_ID is not necessary in your Tourist table: you already link tourists to extra charges via the Tourist_Extra_Charges table. It can be dangerous to the sanity of your data to make these kinds of double connections.

    qid & accept id: (17833022, 17833133) query: Do arithmatic inside database. Is this possible? soup:

    soup wrap:

    Try this:

    update cartable set total = stage_1 + stage_2
    

    In fact, instead of storing the column total in the database, you could just create a view:

    create view carview as 
           select Car, stage_1, stage_2, stage_1 + stage_2 as total
           from cartable
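Both statements port directly to SQLite; the view variant also removes the risk of a stored total going stale. A sketch with made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cartable (Car TEXT, stage_1 INT, stage_2 INT);
INSERT INTO cartable VALUES ('A', 10, 5), ('B', 3, 4);

-- Derive the total on read instead of storing it:
CREATE VIEW carview AS
    SELECT Car, stage_1, stage_2, stage_1 + stage_2 AS total
    FROM cartable;
""")

rows = conn.execute("SELECT Car, total FROM carview ORDER BY Car").fetchall()
print(rows)  # [('A', 15), ('B', 7)]
```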
    
    qid & accept id: (17851492, 17851604) query: Getting count from 2 table and group by month soup:

    soup wrap:

    Join both tables on month:

    SELECT MONTH(I.date) AS `month`
         , COUNT(I.ID) AS `countin`
         , COUNT(O.ID) AS `countOUT`
      FROM TableIN I
     LEFT JOIN TableOUT O
        ON MONTH(I.Date) = MONTH(O.Date)
     GROUP BY MONTH(I.date)
    UNION
    SELECT MONTH(O.date) AS `month`
         , COUNT(I.ID) AS `countin`
         , COUNT(O.ID) AS `countOUT`
      FROM TableIN I
     RIGHT JOIN TableOUT O
        ON MONTH(I.Date) = MONTH(O.Date)
     GROUP BY MONTH(O.date);
    

    Result:

    | MONTH | COUNTIN | COUNTOUT |
    ------------------------------
    |     5 |       1 |        1 |
    |     7 |       1 |        1 |
    |     6 |       0 |        1 |
    

    See this SQLFiddle

    Also to order your result by month you need to use a sub-query like this:

    SELECT * FROM
    (
        SELECT MONTH(I.date) AS `month`
             , COUNT(I.ID) AS `countin`
             , COUNT(O.ID) AS `countOUT`
          FROM TableIN I
         LEFT JOIN TableOUT O
            ON MONTH(I.Date) = MONTH(O.Date)
         GROUP BY MONTH(I.date)
        UNION
        SELECT MONTH(O.date) AS `month`
             , COUNT(I.ID) AS `countin`
             , COUNT(O.ID) AS `countOUT`
          FROM TableIN I
         RIGHT JOIN TableOUT O
            ON MONTH(I.Date) = MONTH(O.Date)
         GROUP BY MONTH(O.date)
        ) tbl
    ORDER BY Month;
    

    See this SQLFiddle
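On engines without RIGHT JOIN (SQLite before 3.39, for example) the same effect comes from swapping the tables in the second branch and LEFT JOINing the other way; here strftime('%m', ...) stands in for MONTH(). A sketch with dates mirroring the result above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableIN  (ID INTEGER PRIMARY KEY, date TEXT);
CREATE TABLE TableOUT (ID INTEGER PRIMARY KEY, date TEXT);
INSERT INTO TableIN  (date) VALUES ('2013-05-10'), ('2013-07-02');
INSERT INTO TableOUT (date) VALUES ('2013-05-11'), ('2013-06-20'), ('2013-07-03');
""")

# Second branch swaps the tables instead of using RIGHT JOIN;
# UNION merges the branches and drops the duplicated months.
rows = conn.execute("""
SELECT CAST(strftime('%m', I.date) AS INT) AS month,
       COUNT(I.ID) AS countin, COUNT(O.ID) AS countout
FROM TableIN I
LEFT JOIN TableOUT O ON strftime('%m', I.date) = strftime('%m', O.date)
GROUP BY strftime('%m', I.date)
UNION
SELECT CAST(strftime('%m', O.date) AS INT),
       COUNT(I.ID), COUNT(O.ID)
FROM TableOUT O
LEFT JOIN TableIN I ON strftime('%m', I.date) = strftime('%m', O.date)
GROUP BY strftime('%m', O.date)
ORDER BY month
""").fetchall()

print(rows)  # [(5, 1, 1), (6, 0, 1), (7, 1, 1)]
```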

    qid & accept id: (17875720, 17878753) query: SQL Joining 4 Tables soup:

    soup wrap:

    You don't need to join subqueries back onto the tables they are sourced from; you can JOIN directly onto them.

    Rather than JOINing a whole bunch of tables directly, you could look at forming subqueries that get the correct constituent parts.

    Something like the following may be what you are after:

    SELECT tbl_hardware.HW_ID,
           tbl_hardware.Aktiv,
           tbl_hardware.typebradmodelID,
           typebradmodel.Type,
           typebradmodel.Brand,
           typebradmodel.Model,
           lastentry.Login,
           lastentry.since
    FROM (SELECT
            tbl_typebradmodel.typebradmodelID,
            tbl_type.tabel AS Type,
            tbl_brand.tabel AS Brand,
            tbl_model.tabel AS Model
        FROM tbl_typebradmodel
        LEFT OUTER JOIN tbl_type ON tbl_typebradmodel.TypID = tbl_type.TypID
        LEFT OUTER JOIN tbl_brand ON tbl_typebradmodel.MarkeID = tbl_brand.MarkeID
        LEFT OUTER JOIN tbl_model ON tbl_typebradmodel.ModelID = tbl_model.ModelID
        ) typebradmodel
    LEFT JOIN tbl_hardware ON tbl_hardware.typebradmodelID = typebradmodel.typebradmodelID
    LEFT JOIN      
        (SELECT 
            MAX(tbl_hardware_assignment.since) AS lastchange, 
            tbl_hardware_assignment.HW_ID,
            tbl_accounts.Login
        FROM tbl_hardware_assignment
        LEFT OUTER JOIN tbl_accounts ON tbl_hardware_assignment.namenID = tbl_accounts.PersID
        GROUP BY tbl_hardware_assignment.HW_ID,tbl_accounts.Login ) lastentry ON tbl_hardware.HW_ID = lastentry.HW_ID
    WHERE tbl_hardware.Aktiv = 1 AND 
        typebradmodel.Brand LIKE 'Samsung' AND
        lastentry.Login = 'MY_USERNAME'
    

    Update: The critical part here is getting the lastchange subquery correct, i.e. using all the columns that describe the relation between tbl_hardware_assignment and tbl_accounts.

    SELECT 
        MAX(tbl_hardware_assignment.since) AS lastchange, 
        tbl_hardware_assignment.HW_ID,
        tbl_accounts.Login
    FROM tbl_hardware_assignment
    LEFT OUTER JOIN tbl_accounts ON tbl_hardware_assignment.namenID = tbl_accounts.PersID
    AND MAX(tbl_hardware_assignment.since) = tbl_accounts.lastchange
    GROUP BY tbl_hardware_assignment.HW_ID,tbl_accounts.Login 
    

    Does this get the right IDs? And if it doesn't, are you able to find out what the relation between these two tables should involve?

    qid & accept id: (17890157, 17890206) query: mysql show db column in multiple returned columns soup:

    If you know already that you only have two values for the week, you could use this query:

    SELECT
      CodeID,
      MAX(CASE WHEN Week=1 THEN ItemID END) Week1,
      MAX(CASE WHEN Week=2 THEN ItemID END) Week2
    FROM
      yourtable
    GROUP BY
      CodeID
    

    but if the number of weeks is not known, you should use a dynamic query, like this:

    SELECT
      CONCAT(
        'SELECT CodeID,',
        GROUP_CONCAT(
          DISTINCT
          CONCAT('MAX(CASE WHEN Week=', Week, ' THEN ItemID END) Week', Week)),
        ' FROM yourtable GROUP BY CodeID;')
    FROM
      yourtable
    INTO @sql;
    
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    

    Please see fiddle here.

    Edit

    If there are multiple items in the same week, you could use GROUP_CONCAT aggregated function instead of MAX:

    SELECT
      CodeID,
      GROUP_CONCAT(DISTINCT CASE WHEN Week=1 THEN ItemID END) Week1,
      GROUP_CONCAT(DISTINCT CASE WHEN Week=2 THEN ItemID END) Week2
    FROM
      yourtable
    GROUP BY
      CodeID;
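    The conditional-aggregation pivot above can be sanity-checked outside MySQL; here is a minimal sketch using Python's sqlite3 (the sample rows are invented for illustration; SQLite evaluates MAX(CASE ...) the same way):

```python
import sqlite3

# In-memory database with the (CodeID, Week, ItemID) layout from the answer.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE yourtable (CodeID INTEGER, Week INTEGER, ItemID INTEGER);
INSERT INTO yourtable VALUES (1, 1, 10), (1, 2, 20), (2, 1, 30);
""")

# One output row per CodeID; each week's ItemID lands in its own column.
# A week with no matching row aggregates only NULLs, so the cell stays NULL.
rows = conn.execute("""
    SELECT CodeID,
           MAX(CASE WHEN Week=1 THEN ItemID END) AS Week1,
           MAX(CASE WHEN Week=2 THEN ItemID END) AS Week2
    FROM yourtable
    GROUP BY CodeID
    ORDER BY CodeID
""").fetchall()
# rows == [(1, 10, 20), (2, 30, None)]
```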
    
    qid & accept id: (17910415, 17910512) query: using where clause in REPLACE statment soup:

    REPLACE works by matching the primary key. If you specify a primary key value in the REPLACE and no row with that value exists, it works like INSERT. If the primary key value you try to insert already exists in the table, it overwrites the other columns of the row.

    So there is no need for a WHERE clause. It's implicitly looking for WHERE pk = value.

    If you want it to detect the package detail for a given user and you want to use REPLACE, you must make the userid the primary key.

    CREATE TABLE userpackages (
      userid INT PRIMARY KEY,
      package_detail TEXT,
      FOREIGN KEY (userid) REFERENCES users(userid)
    );
    

    First we add the user's first package:

    REPLACE INTO userpackages (userid, package_detail) 
    VALUES (1234, 'some package');
    

    Next we change the package for user 1234:

    REPLACE INTO userpackages (userid, package_detail) 
    VALUES (1234, 'some other package');
    

    If userid isn't your primary key, then REPLACE isn't going to work.
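    The upsert-by-primary-key behaviour described above is easy to verify: SQLite also implements REPLACE INTO with the same semantics, so a minimal sketch (the foreign key is omitted for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE userpackages (userid INTEGER PRIMARY KEY, package_detail TEXT)")

# First REPLACE behaves like INSERT: no row with userid 1234 exists yet.
conn.execute(
    "REPLACE INTO userpackages (userid, package_detail) VALUES (1234, 'some package')")

# Second REPLACE overwrites the existing row instead of adding a new one.
conn.execute(
    "REPLACE INTO userpackages (userid, package_detail) VALUES (1234, 'some other package')")

rows = conn.execute("SELECT userid, package_detail FROM userpackages").fetchall()
# rows == [(1234, 'some other package')] -- still exactly one row
```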

    qid & accept id: (17925232, 17936398) query: How to put data from the database to the template soup:

    Magento works with the MVC design pattern, but in a way that differs from the usual MVC. In Magento we have:

    - Model
    - View:
      - Blocks
      - Layouts
      - Templates
      (the blocks grab the data from the model and pass it to the template, all through the layout system)
    - Controllers

    So, the answer to your question is that you need a method in one of your models, invoke it through the block, and then pass the data this way. Block:

    class Mynamespace_Mymodule_Block_Myblock extends Mage_Core_Block_Template
    {
        public function getMyProductData()
        {
            $product = Mage::getModel('catalog/product')->load($id);
            return $product;    
        }
    } 
    

    And then you can retrieve it in your phtml like this:

    $_product = $this->getMyProductData();
    echo $_product->getName();
    

    Greetings from México :D

    qid & accept id: (18001322, 18001398) query: User Defined Variable in MySQL Insert Query soup:

    try this

        INSERT INTO msMenu (column1, column2 , column3)
        SELECT  COALESCE( MAX( menuId ) , 0 ) +1 ,'My Menu', '1'  
        FROM msMenu;
    

    EDIT2:

     SET @newId = (select COALESCE( MAX( menuId ) , 0 ) +1 from msMenu)
     INSERT INTO msMenu (column1, column2 , column3)
     SELECT  @newId ,'My Menu', '1'  
     FROM msMenu;
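    A minimal sketch of the MAX(menuId)+1 pattern, using Python's sqlite3 as a stand-in for MySQL (the column names menuId/name/flag are assumptions; the answer used placeholder column1..column3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE msMenu (menuId INTEGER, name TEXT, flag TEXT)")

sql = """INSERT INTO msMenu (menuId, name, flag)
         SELECT COALESCE(MAX(menuId), 0) + 1, 'My Menu', '1' FROM msMenu"""
conn.execute(sql)  # empty table: COALESCE turns the NULL MAX into 0, so menuId = 1
conn.execute(sql)  # second run sees MAX(menuId) = 1, so menuId = 2

ids = [r[0] for r in conn.execute("SELECT menuId FROM msMenu ORDER BY menuId")]
# ids == [1, 2]
```

    Note that this read-then-insert pattern is not safe under concurrent writers unless the table is locked; an auto-increment column avoids that race entirely.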
    
    qid & accept id: (18020825, 18021203) query: Convert datetime to MM/dd/yyyy HH:MM:SS AM/PM soup:

    Your current SET doesn't even work. When you have a valid datetime value coming in from a string literal, you can do this:

    DECLARE @adddate DATETIME;
    
    SET @adddate = '2011-07-06T22:30:07.521';
    
    SELECT CONVERT(CHAR(11), @adddate, 103) 
      + LTRIM(RIGHT(CONVERT(CHAR(20), @adddate, 22), 11));
    

    Result:

    06/07/2011 10:30:07 PM
    

    If you actually want m/d/y (your question is ambiguous), there is a slightly shorter path using style 22:

    DECLARE @adddate DATETIME;
    
    SET @adddate = '2011-07-06T22:30:07.521';
    
    SELECT STUFF(CONVERT(CHAR(20), @adddate, 22), 7, 2, YEAR(@adddate));
    

    Result:

    07/06/2011 10:30:07 PM
    

    However, this is a bad idea for two reasons:

    1. regional formats are confusing (will a reader know 05/06/2013 is May 6th and not June 5th? Depends on where they're from) and even dangerous (if they pass that string back in, you might store June 5th when they meant May 6th).

    2. your client language is better off using its own Format() or ToString() methods to format this for display at the very last moment possible.
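    Point 2 in practice: a sketch of formatting at the display boundary in Python (any client language with a real date type works the same way, and the stored value stays a datetime the whole time):

```python
from datetime import datetime

# The value from the answer, kept as a datetime, not a string.
adddate = datetime(2011, 7, 6, 22, 30, 7)

# Format only when the value crosses the display boundary.
shown = adddate.strftime("%m/%d/%Y %I:%M:%S %p")
# shown == '07/06/2011 10:30:07 PM'
```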

    qid & accept id: (18105224, 18105394) query: Convert varchar data to datetime in SQL server when source data is w/o format soup:

    You can make it a little more compact by not forcing the dashes, and using STUFF instead of SUBSTRING:

    DECLARE @Var VARCHAR(100) = '20130120161643730';
    
    SET @Var = LEFT(@Var, 8) + ' ' 
      + STUFF(STUFF(STUFF(RIGHT(@Var, 9),3,0,':'),6,0,':'),9,0,'.');
    
    SELECT [string] = @Var, [datetime] = CONVERT(DATETIME, @Var);
    

    Results:

    string                  datetime
    ---------------------   -----------------------
    20130120 16:16:43.730   2013-01-20 16:16:43.730
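    The same splice can be sketched with plain string slicing, which also shows that the rearranged string parses as a real datetime (sample value taken from the answer):

```python
from datetime import datetime

raw = "20130120161643730"  # yyyymmddhhmissmmm, no separators

# Same rearrangement the STUFF calls perform: keep the 8 date digits,
# then splice ':' and '.' into the 9 trailing time digits.
time_part = raw[8:]
formatted = (raw[:8] + " "
             + time_part[:2] + ":" + time_part[2:4] + ":"
             + time_part[4:6] + "." + time_part[6:])
# formatted == '20130120 16:16:43.730'

# The result is a parseable datetime ('%f' pads '730' to 730000 microseconds).
parsed = datetime.strptime(formatted, "%Y%m%d %H:%M:%S.%f")
```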
    
    qid & accept id: (18107553, 18107667) query: How to replace an int with text in a query soup:

    You should consider storing the lookup in a new table... but just so you're aware of your options, you can also use the DATENAME(WEEKDAY) function:

    SELECT DATENAME(WEEKDAY, 0)
    

    Returns:

    Monday
    

    SQL Fiddle
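    For comparison, the equivalent weekday-name lookup in Python (SQL Server's day 0 is 1900-01-01, which was a Monday):

```python
import datetime

# DATENAME(WEEKDAY, 0) maps SQL Server's day 0 (1900-01-01) to its weekday
# name; in Python the lookup is just strftime('%A') on a date.
day0 = datetime.date(1900, 1, 1)
name = day0.strftime("%A")
# name == 'Monday'
```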

    qid & accept id: (18111896, 18111953) query: Filling in missing data soup:

    Something pretty basic could be

    SELECT MT.Date, MT.Text, 
           CASE WHEN MT.Text = 'bbb' THEN Number
                ELSE (SELECT TOP 1 Number 
                                   FROM MyTable MT2 
                                   WHERE MT2.Date < MT.Date AND 
                                         MT2.Text = 'bbb'
                                   ORDER BY MT2.Date DESC)
                END Number,
           CASE WHEN MT.Text = 'bbb' THEN Number2
                ELSE (SELECT TOP 1 Number2 
                                   FROM MyTable MT2 
                                   WHERE MT2.Date < MT.Date AND 
                                         MT2.Text = 'bbb'
                                   ORDER BY MT2.Date DESC)
                END Number2 
           FROM MyTable MT
    

    SQLFiddle: http://sqlfiddle.com/#!3/cbee5/3

    or using OUTER APPLY (it should be faster)

    SELECT MT.Date, MT.Text, 
           CASE WHEN MT.Text = 'bbb' THEN MT.Number
                ELSE MT2.Number 
                END Number,
           CASE WHEN MT.Text = 'bbb' THEN MT.Number2
                ELSE MT2.Number2
                END Number2
           FROM MyTable MT
           OUTER APPLY (SELECT TOP 1 MT2.Number, MT2.Number2 
                                     FROM MyTable MT2
                                     WHERE MT.Text <> 'bbb' AND 
                                           MT2.Text = 'bbb' AND 
                                           MT2.Date < MT.Date
                                     ORDER BY MT2.Date DESC
                       ) MT2
    

    SQLFiddle: http://sqlfiddle.com/#!3/cbee5/7
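    A minimal sketch of the correlated-subquery variant in Python's sqlite3 (reduced to a single Number column, with invented sample rows; SQLite spells TOP 1 as LIMIT 1):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE MyTable (Date TEXT, Text TEXT, Number INTEGER);
INSERT INTO MyTable VALUES
  ('2013-01-01', 'bbb', 5),
  ('2013-01-02', 'aaa', NULL),  -- should inherit 5 from the last 'bbb' row
  ('2013-01-03', 'bbb', 7),
  ('2013-01-04', 'ccc', NULL);  -- should inherit 7
""")

# Non-'bbb' rows pull Number from the most recent earlier 'bbb' row.
rows = conn.execute("""
    SELECT MT.Date, MT.Text,
           CASE WHEN MT.Text = 'bbb' THEN MT.Number
                ELSE (SELECT MT2.Number FROM MyTable MT2
                      WHERE MT2.Date < MT.Date AND MT2.Text = 'bbb'
                      ORDER BY MT2.Date DESC LIMIT 1)
           END AS Number
    FROM MyTable MT ORDER BY MT.Date
""").fetchall()
# rows == [('2013-01-01', 'bbb', 5), ('2013-01-02', 'aaa', 5),
#          ('2013-01-03', 'bbb', 7), ('2013-01-04', 'ccc', 7)]
```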

    qid & accept id: (18146788, 18149151) query: From XML to list of paths in Oracle PL/SQL environment soup:

    You can use XMLTable to produce list of paths with XQuery.

    E.g.

    (SQLFiddle)

    with params as (
      select 
        xmltype('
          
            0123
            2345
            
               3
            
          
        ') p_xml
      from dual  
    )    
    select
      path_name || '/text()'
    from
      XMLTable(
        '
          for $i in $doc/descendant-or-self::*
            return  {$i/string-join(ancestor-or-self::*/name(.), ''/'')} 
        '
        passing (select p_xml from params) as "doc"
        columns path_name varchar2(4000) path '//element_path'
      )
    

    but that's the wrong way to go about it, if only because it's not as efficient as it could be.

    Just extract all values with same XQuery: (SQLFiddle)

    with params as (
      select 
        xmltype('
          
            0123
            2345
            
               3
            
          
        ') p_xml
      from dual  
    )    
    select
      element_path, element_text
    from
      XMLTable(
        '              
          for $i in $doc/descendant-or-self::*
            return 
                      {$i/string-join(ancestor-or-self::*/name(.), ''/'')} 
                      {$i/text()}
                     
        '
        passing (select p_xml from params) as "doc"
        columns 
          element_path   varchar2(4000) path '//element_path',
          element_text   varchar2(4000) path '//element_content'
      )
    
    qid & accept id: (18186212, 18335335) query: How to escape the "." reserved symbol when using an input for an sql script soup:

    While I waited for an answer I found the following solutions:

        "set define off" and using \.
    

    OR

        "set escape ON" and using .
    

    And turning the properties back to their default values after using them. I ended up using Nicholas Krasnov's solution of using a "&1..TABLEX" because it didn't require any property change. Thank you!

    qid & accept id: (18187989, 18188052) query: Query to get only one row from multiple rows having same values soup:

    To get the latest row in MySQL, you need to use a join or correlated subquery:

    SELECT id, user_receiver, user_sender, post_id, action, date, is_read
    FROM notification n
    WHERE user_receiver=$ses_user and
          date = (select max(date)
                  from notification n2
                  where n2.user_sender = n.user_sender and
                        n2.action = n.action and
                        n2.post_id = n.post_id and
                        n2.is_read = n.is_read
                 )
    order by date desc;
    

    In other databases, you would simply use the row_number() function (or distinct on in Postgres).

    EDIT:

    For the biggest id:

    SELECT id, user_receiver, user_sender, post_id, action, date, is_read
    FROM notification n
    WHERE user_receiver=$ses_user and
          id   = (select max(id)
                  from notification n2
                  where n2.user_sender = n.user_sender and
                        n2.action = n.action and
                        n2.post_id = n.post_id
                 )
    order by date desc;
    

    If you want the number of rows where isread = 1, then you can do something like:

    SELECT sum(is_read = 1)
    FROM notification n
    WHERE user_receiver=$ses_user and
          id   = (select max(id)
                  from notification n2
                  where n2.user_sender = n.user_sender and
                        n2.action = n.action and
                        n2.post_id = n.post_id
                 );
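    A minimal sketch of the max-id variant using Python's sqlite3 (the sample rows and the reduced column list are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE notification (id INTEGER, user_sender TEXT, action TEXT,
                           post_id INTEGER, is_read INTEGER);
INSERT INTO notification VALUES
  (1, 'alice', 'like', 10, 0),
  (2, 'alice', 'like', 10, 1),  -- later duplicate of the same event group
  (3, 'bob',   'like', 10, 0);
""")

# Keep only the row with the biggest id per (user_sender, action, post_id).
rows = conn.execute("""
    SELECT id, user_sender FROM notification n
    WHERE id = (SELECT MAX(id) FROM notification n2
                WHERE n2.user_sender = n.user_sender
                  AND n2.action = n.action
                  AND n2.post_id = n.post_id)
    ORDER BY id
""").fetchall()
# rows == [(2, 'alice'), (3, 'bob')]
```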
    
    qid & accept id: (18251762, 18251844) query: Remove duplicates if you have only one column with value soup:

    if you are allowed to use CTE:

    with cte as (
        select
            row_number() over(partition by Value order by Value) as row_num,
            Value
        from Table1
    )
    delete from cte where row_num > 1
    

    sql fiddle demo

    as t-clausen.dk suggested in comments, you don't even need value inside the CTE:

    with cte as (
        select
            row_number() over(partition by Value order by Value) as row_num
        from Table1
    )
    delete from cte where row_num > 1;
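    SQLite cannot DELETE through a CTE the way SQL Server can, but the same dedup can be sketched with the implicit rowid standing in for row_number() (sample rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (Value INTEGER);
INSERT INTO Table1 VALUES (1), (1), (2), (2), (2), (3);
""")

# Keep the first physical row (MIN(rowid)) of each Value, delete the rest --
# the same effect as deleting every row whose row_number within its
# Value partition is greater than 1.
conn.execute("""
    DELETE FROM Table1
    WHERE rowid NOT IN (SELECT MIN(rowid) FROM Table1 GROUP BY Value)
""")

values = [r[0] for r in conn.execute("SELECT Value FROM Table1 ORDER BY Value")]
# values == [1, 2, 3]
```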
    
    qid & accept id: (18277282, 18277521) query: Time Since Last Purchase soup:

    I think this is most easily done with a correlated subquery:

    select t.*,
           datediff(t.TransactionDate,
                    (select t2.TransactionDate
                     from t t2
                     where t2.CustomerId = t.CustomerId and
                           t2.TransactionDate < t.TransactionDate
                     order by t2.TransactionDate desc
                     limit 1
                    )) as daysSinceLastPurchase
    from t;
    

    This makes the assumption that transactions occur on different days.

    If this assumption is not true and the transaction ids are in ascending order, you can use:

    select t.*,
           datediff(t.TransactionDate,
                    (select t2.TransactionDate
                     from t t2
                     where t2.CustomerId = t.CustomerId and
                           t2.TransactionId < t.TransactionId
                     order by t2.TransactionId desc
                     limit 1
                    )) as daysSinceLastPurchase
    from t;
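    A sketch of the same correlated subquery in Python's sqlite3, which has no DATEDIFF; the difference of julianday() values plays that role (sample rows invented, LIMIT 1 as in the original):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (TransactionId INTEGER, CustomerId INTEGER, TransactionDate TEXT);
INSERT INTO t VALUES
  (1, 7, '2013-08-01'),
  (2, 7, '2013-08-04'),
  (3, 7, '2013-08-10');
""")

# Days between each transaction and the customer's previous one; the first
# transaction has no predecessor, so the subquery (and the result) is NULL.
rows = conn.execute("""
    SELECT TransactionId,
           CAST(julianday(TransactionDate) - julianday(
                (SELECT t2.TransactionDate FROM t t2
                 WHERE t2.CustomerId = t.CustomerId
                   AND t2.TransactionDate < t.TransactionDate
                 ORDER BY t2.TransactionDate DESC LIMIT 1)) AS INTEGER)
    FROM t ORDER BY TransactionId
""").fetchall()
# rows == [(1, None), (2, 3), (3, 6)]
```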
    
    qid & accept id: (18289563, 18289719) query: How to retrieve samples from the database? soup:

    If you want to get all posts that have tags in a comma delimited list:

    select postid
    from post_tags
    where find_in_set(tagid, @LIST) > 0
    group by postid
    having count(distinct tagid) = 1+length(@LIST) - length(replace(@LIST, ',', ''));
    

    If you want just a "sample" of them:

    select postid
    from (select postid
          from post_tags
          where find_in_set(tagid, @LIST) > 0
          group by postid
          having count(distinct tagid) = 1+length(@LIST) - length(replace(@LIST, ',', ''))
         ) t
    order by rand()
    limit 5
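    The HAVING clause counts the items in @LIST by comparing string lengths: a comma-delimited list holds one more item than it holds commas. That arithmetic can be checked on its own, with no database involved (the tag values are invented):

```python
# Count items in a comma-delimited list the way the HAVING clause does:
# 1 + length(list) - length(list with commas removed) == 1 + comma count.
tag_list = "3,7,19"

n_items = 1 + len(tag_list) - len(tag_list.replace(",", ""))
# n_items == 3, matching len(tag_list.split(","))
```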
    
    qid & accept id: (18320028, 18320074) query: Get the names of all Triggers currently in the database via SQL statement (Oracle SQL Developer) soup:

    What you have is pretty close:

    select owner, object_name
    from all_objects
    where object_type = 'TRIGGER'
    

    Or more usefully:

    select owner, trigger_name, table_owner, table_name, triggering_event
    from all_triggers
    

    all_triggers has other columns that give you more information than all_objects does, like when the trigger fires. You can get more information about this and other useful data dictionary views in the documentation.

    qid & accept id: (18359263, 18359482) query: Copying Data from one table into another and simultaneously add another column soup:

    Here is an example using create table as syntax:

    CREATE TABLE NEW_TBL AS
        SELECT Col1, Col2, Col3, 'Newcol' as Col4
        FROM OLD_TBL;
    

    To assign a data type, use cast() or convert() to get the type you want:

    CREATE TABLE NEW_TBL AS
        SELECT Col1, Col2, Col3, cast('Newcol' as varchar(255)) as Col4,
               cast(123 as decimal(18, 2)) as Col5
        FROM OLD_TBL;
    

    By the way, you can also add the column directly to the old table:

    alter table old_tbl add col4 varchar(255);
    

    You can then update the value there, if you wish.
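    A minimal sketch of CREATE TABLE ... AS SELECT with an added constant column, using Python's sqlite3 (SQLite only casts to its own storage types, so TEXT stands in for varchar(255); table contents invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE OLD_TBL (Col1 INTEGER, Col2 TEXT, Col3 TEXT);
INSERT INTO OLD_TBL VALUES (1, 'a', 'b'), (2, 'c', 'd');

-- Copy the rows and add a constant extra column in the same statement.
CREATE TABLE NEW_TBL AS
    SELECT Col1, Col2, Col3, CAST('Newcol' AS TEXT) AS Col4
    FROM OLD_TBL;
""")

rows = conn.execute("SELECT Col1, Col4 FROM NEW_TBL ORDER BY Col1").fetchall()
# rows == [(1, 'Newcol'), (2, 'Newcol')]
```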

    qid & accept id: (18377746, 18378117) query: Change/Update part of string in MySQL soup:

    Try

    UPDATE ifns_code INNER JOIN
    ( SELECT name n, REPLACE(fio,'**!!!**','**???**') f FROM ifns_code ) t ON n=name
    SET ifns_code.fio=REPLACE(REPLACE(f,'**!!!**',code),'**???**',name)
    

    This will do both replace operations: first the three-letter code (whose column name I don't know; I have used code as a name) and then the name. If you want the last **!!!** instance to remain as it is, just replace name with **!!!** in the outer REPLACE function.

    Edit:

    Now, having a clear description of what you want, I can provide you with the desired UPDATE statement:

    UPDATE ifns_code INNER JOIN (
      SELECT name n, instr(fio,'Profile/') i, instr(fio,'

    http://sqlfiddle.com/#!8/3c1a4/1

    In the derived table expression I evaluate the positions before (i) and after (j) the string portion I want to change. The rest is just a combination of substring and concat.
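    The substring-and-concat idea generalizes beyond MySQL. Here is a minimal SQLite sketch (the table, marker string, and replacement value are made up) that locates a portion of a string with instr() and rebuilds the value around it:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE profiles (fio TEXT)")
    conn.execute("INSERT INTO profiles VALUES ('see Profile/old-name for details')")

    # Keep everything up to and including 'Profile/', splice in the new value,
    # then append everything from the next known delimiter onward.
    conn.execute("""
        UPDATE profiles
        SET fio = substr(fio, 1, instr(fio, 'Profile/') + length('Profile/') - 1)
                  || 'new-name'
                  || substr(fio, instr(fio, ' for'))
    """)
    row = conn.execute("SELECT fio FROM profiles").fetchone()
    ```

    Only the segment between the two located positions changes; the surrounding text is preserved verbatim.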

    qid & accept id: (18404055, 18405706) query: Index for finding an element in a JSON array soup:

    soup wrap:

    jsonb in Postgres 9.4+

    With the new binary JSON data type jsonb, Postgres 9.4 introduced largely improved index options. You can now have a GIN index on a jsonb array directly:

    CREATE TABLE tracks (id serial, artists jsonb);
    CREATE INDEX tracks_artists_gin_idx ON tracks USING gin (artists);

    No need for a function to convert the array. This would support a query:

    SELECT * FROM tracks WHERE artists @> '[{"name": "The Dirty Heads"}]';
    

    @> being the new jsonb "contains" operator, which can use the GIN index. (Not for type json, only jsonb!)

    Or you use the more specialized, non-default GIN operator class jsonb_path_ops for the index:

    CREATE INDEX tracks_artists_gin_idx ON tracks
    USING  gin (artists jsonb_path_ops);

    Same query.


    If artists only holds names as displayed in the example, it would be more efficient to store a less redundant JSON value to begin with: just the values as text primitives and the redundant key can be in the column name.

    Note the difference between JSON objects and primitive types:

    CREATE TABLE tracks (id serial, artistnames jsonb);
    INSERT INTO tracks  VALUES (2, '["The Dirty Heads", "Louis Richards"]');
    
    CREATE INDEX tracks_artistnames_gin_idx ON tracks USING gin (artistnames);

    Query:

    SELECT * FROM tracks WHERE artistnames ? 'The Dirty Heads';
    

    ? does not work for object values, just keys and array elements.
    Or (more efficient if names are repeated often):

    CREATE INDEX tracks_artistnames_gin_idx ON tracks
    USING  gin (artistnames jsonb_path_ops);
    

    Query:

    SELECT * FROM tracks WHERE artistnames @> '"The Dirty Heads"'::jsonb;
    

    jsonb_path_ops currently only supports indexing the @> operator.
    There are more index options, details in the manual.

    json in Postgres 9.3+

    This should work with an IMMUTABLE function:

    CREATE OR REPLACE FUNCTION json2arr(_j json, _key text)
      RETURNS text[] LANGUAGE sql IMMUTABLE AS
    'SELECT ARRAY(SELECT elem->>_key FROM json_array_elements(_j) elem)';
    

    Create this functional index:

    CREATE INDEX tracks_artists_gin_idx ON tracks
    USING  gin (json2arr(artists, 'name'));
    

    And use a query like this. The expression in the WHERE clause has to match the one in the index:

    SELECT * FROM tracks
    WHERE  '{"The Dirty Heads"}'::text[] <@ (json2arr(artists, 'name'));
    

    Updated with feedback in comments. We need to use array operators to support the GIN index.
    The "is contained by" operator <@ in this case.

    Notes on function volatility

    You can declare your function IMMUTABLE even though json_array_elements() wasn't.
    Most JSON functions used to be only STABLE, not IMMUTABLE. There was a discussion on the hackers list to change that. Most are IMMUTABLE now. Check with:

    SELECT p.proname, p.provolatile
    FROM   pg_proc p
    JOIN   pg_namespace n ON n.oid = p.pronamespace
    WHERE  n.nspname = 'pg_catalog'
    AND    p.proname ~~* '%json%';
    

    Functional indexes only work with IMMUTABLE functions.
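    For intuition, here is what json2arr() computes, sketched in plain Python (the sample array mirrors the artists example and is illustrative): pull one key out of every object in a JSON array so membership tests become simple list operations:

    ```python
    import json

    def json2arr(j: str, key: str) -> list:
        """Extract one key from every object in a JSON array of objects."""
        return [elem[key] for elem in json.loads(j)]

    artists = '[{"name": "The Dirty Heads"}, {"name": "Louis Richards"}]'
    names = json2arr(artists, "name")
    ```

    The functional index stores exactly this derived array, which is why the WHERE clause has to repeat the same expression.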

    qid & accept id: (18410600, 18410959) query: Selecting the most recent, lowest price from multiple vendors for an inventory item soup:

    soup wrap:

    Much simpler with DISTINCT ON in Postgres:

    Current price per item for each vendor

    SELECT DISTINCT ON (p.item_id, p.vendor_id)
           i.title, p.price, p.vendor_id
    FROM   prices p
    JOIN   items  i ON i.id = p.item_id
    ORDER  BY p.item_id, p.vendor_id, p.created_at DESC;
    

    Optimal vendor for each item

    SELECT DISTINCT ON (item_id) 
           i.title, p.price, p.vendor_id -- add more columns as you need
    FROM (
       SELECT DISTINCT ON (item_id, vendor_id)
              item_id, price, vendor_id -- add more columns as you need
       FROM   prices p
       ORDER  BY item_id, vendor_id, created_at DESC
       ) p
    JOIN   items i ON i.id = p.item_id
    ORDER  BY item_id, price;
    

    ->SQLfiddle demo

    Detailed explanation:
    Select first row in each GROUP BY group?
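    The two-level DISTINCT ON logic can be restated in plain Python, which makes the semantics easy to check (the sample rows are made up): first keep the most recent price per (item, vendor), then the cheapest current vendor per item:

    ```python
    prices = [
        # (item_id, vendor_id, price, created_at)
        (1, 'a', 10.0, '2013-08-01'),
        (1, 'a',  9.0, '2013-08-02'),   # newer price for vendor a
        (1, 'b',  8.0, '2013-08-01'),
        (2, 'a', 20.0, '2013-08-01'),
    ]

    # "DISTINCT ON (item_id, vendor_id) ... ORDER BY created_at DESC":
    # iterate in date order so later rows overwrite earlier ones.
    current = {}
    for item, vendor, price, created in sorted(prices, key=lambda r: r[3]):
        current[(item, vendor)] = price

    # "DISTINCT ON (item_id) ... ORDER BY price": cheapest current vendor per item.
    best = {}
    for (item, vendor), price in current.items():
        if item not in best or price < best[item][1]:
            best[item] = (vendor, price)
    ```

    This is exactly what the nested query computes, with the inner DISTINCT ON building `current` and the outer one building `best`.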

    qid & accept id: (18415438, 18415525) query: SQL Query Sum and total of rows soup:

    soup wrap:

    Try this query:

    SELECT ITEM
      ,SUM(CASE WHEN LOCATION = 001 THEN QUANTITY ELSE 0 END) AS Location_001
      ,SUM(CASE WHEN LOCATION = 002 THEN QUANTITY ELSE 0 END) AS Location_002
      ,SUM(CASE WHEN LOCATION = 003 THEN QUANTITY ELSE 0 END) AS Location_003
      ,SUM(Quantity) AS Total
    FROM Table1
    GROUP BY ITEM;
    

    In case you don't know the locations, you can try this dynamic query:

    SET @sql = NULL;
    SELECT
      GROUP_CONCAT(DISTINCT
        CONCAT(
          'SUM(CASE WHEN `LOCATION` = ''',
          `LOCATION`,
          ''' THEN QUANTITY ELSE 0 END) AS `',
          `LOCATION`, '`'
        )
      ) INTO @sql
    FROM Table1;
    
    SET @sql = CONCAT('SELECT ITEM, ', @sql,'
                         ,SUM(Quantity) AS Total 
                         FROM Table1
                        GROUP BY ITEM
                      ');
    
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
    

    Result:

    |     ITEM | 1 | 2 | 3 | TOTAL |
    |----------|---|---|---|-------|
    | BLUE CAR | 0 | 2 | 5 |     7 |
    |  RED CAR | 3 | 8 | 0 |    11 |
    

    See this SQLFiddle
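    The static pivot runs essentially unchanged on most engines. Here it is against SQLite, with sample rows reconstructed to reproduce the result table shown above:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE table1 (item TEXT, location INTEGER, quantity INTEGER)")
    conn.executemany("INSERT INTO table1 VALUES (?, ?, ?)", [
        ('RED CAR', 1, 3), ('RED CAR', 2, 8),
        ('BLUE CAR', 2, 2), ('BLUE CAR', 3, 5),
    ])

    # Conditional aggregation: one SUM(CASE ...) column per known location.
    rows = conn.execute("""
        SELECT item,
               SUM(CASE WHEN location = 1 THEN quantity ELSE 0 END) AS loc1,
               SUM(CASE WHEN location = 2 THEN quantity ELSE 0 END) AS loc2,
               SUM(CASE WHEN location = 3 THEN quantity ELSE 0 END) AS loc3,
               SUM(quantity) AS total
        FROM table1
        GROUP BY item
        ORDER BY item
    """).fetchall()
    ```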

    qid & accept id: (18420123, 18423693) query: Count preceding rows that match criteria soup:

    soup wrap:

    This seems to do it:

    library(data.table)
    set.seed(50)
    DT <- data.table(NETSALES=ifelse(runif(40)<.15,0,runif(40,1,100)), cust=rep(1:2, each=20), dt=1:20)
    DT[,dir:=ifelse(NETSALES>0,1,0)]
    dir.rle <- rle(DT$dir)
    DT <- transform(DT, indexer = rep(1:length(dir.rle$lengths), dir.rle$lengths))
    DT[,runl:=cumsum(dir),by=indexer]
    

    credit to Cumulative sums over run lengths. Can this loop be vectorized?


    Edit by Roland:

    Here is the same with better performance and also considering different customers:

    #no need for ifelse
    DT[,dir:= NETSALES>0]
    
    #use a function to avoid storing the rle, which could be huge
    runseq <- function(x) {
      x.rle <- rle(x)
      rep(1:length(x.rle$lengths), x.rle$lengths)
    }
    
    #never use transform with data.table
    DT[,indexer := runseq(dir)]
    
    #include cust in by
    DT[,runl:=cumsum(dir),by=list(indexer,cust)]
    

    Edit: joe added SQL solution http://sqlfiddle.com/#!6/990eb/22

    The SQL solution takes 48 minutes on a machine with 128 GB of RAM across 22 million rows; the R solution takes about 20 seconds on a workstation with 4 GB of RAM. Go R!
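    The rle()-plus-cumsum trick is easy to restate in plain Python with itertools.groupby, which may help readers who don't use data.table (the sample vector is made up):

    ```python
    from itertools import groupby

    # Number each run of consecutive equal flags, then take a
    # cumulative sum within each run -- the counter resets at every
    # run boundary, just like cumsum(dir) by indexer.
    netsales = [5, 12, 0, 7, 3, 0, 0, 9]
    direction = [1 if x > 0 else 0 for x in netsales]

    runl = []
    for _, run in groupby(direction):
        total = 0
        for flag in run:
            total += flag
            runl.append(total)
    ```

    Zero-sales rows contribute 0 and reset the count, so `runl` holds the number of consecutive preceding positive-sales rows, inclusive.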

    qid & accept id: (18477582, 18477634) query: One column, two names, mysql soup:

    soup wrap:

    I believe you are looking for a view. You can define the view as:

    create view v_table as
        select t.*, `old` as `new`
        from `table` t;
    

    Assuming no naming conflict, this will give you both.

    Now, you might want to go a step further. You can rename the old table and have the view take the name of the old table:

    rename table `table` to `old_table`;
    create view t as
        select t.*, `old` as `new`
        from `old_table` t;
    

    That way, everything that references table will start using the view with the new column name.
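    Here is the rename-plus-view trick end to end in SQLite (illustrative names; MySQL differs only in the RENAME TABLE syntax). Code that selects from t keeps working and sees the old column under its new name:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (id INTEGER, old TEXT)")
    conn.execute("INSERT INTO t VALUES (1, 'hello')")

    # Move the real table aside, then put a view in its place that
    # exposes the old column under both names.
    conn.execute("ALTER TABLE t RENAME TO old_t")
    conn.execute("CREATE VIEW t AS SELECT old_t.*, old AS new FROM old_t")

    row = conn.execute("SELECT id, old, new FROM t").fetchone()
    ```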

    qid & accept id: (18486580, 18486818) query: Oracle - calculate number of rows before some condition is applied soup:

    soup wrap:

    You can use the analytic version of COUNT() in a nested query, e.g.:

    SELECT * FROM
    (
      SELECT table_name,
        COUNT(*) OVER() AS numberofrows
      FROM all_tables
      WHERE owner = 'SYS'
      ORDER BY table_name
    )
    WHERE rownum < 10;
    

    You need to nest it anyway to apply an order-by before the rownum filter to get consistent results, otherwise you get a random(ish) set of rows.

    You can also replace rownum with the analytic ROW_NUMBER() function:

    SELECT table_name, numberofrows FROM
    (
      SELECT table_name,
        COUNT(*) OVER () AS numberofrows,
        ROW_NUMBER() OVER (ORDER BY table_name) AS rn
      FROM all_tables
      WHERE owner = 'SYS'
    )
    WHERE rn < 10;
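    The same shape works outside Oracle. A SQLite sketch (window functions need SQLite 3.25+; the table and data are illustrative): attach the total row count to every row, number them in order, then keep the first few:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE tabs (table_name TEXT)")
    conn.executemany("INSERT INTO tabs VALUES (?)", [('c',), ('a',), ('b',)])

    rows = conn.execute("""
        SELECT table_name, numberofrows FROM (
            SELECT table_name,
                   COUNT(*) OVER () AS numberofrows,
                   ROW_NUMBER() OVER (ORDER BY table_name) AS rn
            FROM tabs
        )
        WHERE rn < 3
        ORDER BY rn
    """).fetchall()
    ```

    Every surviving row carries the full pre-filter count, which is the point of computing COUNT(*) OVER () before the row-number filter.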
    
    qid & accept id: (18499562, 18499651) query: Connecting to a SQL Server through another Sever connection that's not linked soup:

    soup wrap:

    You'd need either OPENROWSET or OPENDATASOURCE

    Found examples here:

    OPENROWSET:

    SELECT *
    FROM OPENROWSET('SQLNCLI',
       'DRIVER={SQL Server};SERVER=MyServer;UID=MyUserID;PWD=MyCleverPassword',
       'select @@ServerName') 
    

    OPENDATASOURCE:

    SELECT * 
    FROM OPENDATASOURCE ('SQLNCLI',
       'Data Source=OtherServer\InstanceName;Catalog=RemoteDB;User ID=SQLLogin;Password=Secret;').RemoteDB.dbo.SomeTable
    
    qid & accept id: (18513029, 18513282) query: MySQL order by points from 2nd table soup:

    soup wrap:

    You want to move your expression into the select clause:

    SELECT i.*,
           (SELECT count(*) AS points 
            FROM `amenities_index` ai
            WHERE amenity_id in (1, 2) AND
                  ai.item_id = i.id
           ) as points
    FROM items i
    ORDER BY points desc;
    

    You can also do this as a join query with aggregation:

    SELECT i.*, ai.points
    FROM items i join
         (select ai.item_id, count(*) as points
          from amenities_index ai
          where amenity_id in (1, 2)
          group by ai.item_id
         ) ai
         on ai.item_id = i.id
    ORDER BY ai.points desc;
    

    In most databases, I would prefer this version over the first one. However, MySQL would allow the first in a view but not the second, so it has some strange limitations under some circumstances.
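    Both shapes run on SQLite as well. A small sketch with made-up data (note the GROUP BY in the derived table, which standard SQL requires):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE items (id INTEGER, name TEXT)")
    conn.execute("CREATE TABLE amenities_index (item_id INTEGER, amenity_id INTEGER)")
    conn.executemany("INSERT INTO items VALUES (?, ?)", [(1, 'flat'), (2, 'house')])
    conn.executemany("INSERT INTO amenities_index VALUES (?, ?)",
                     [(1, 1), (1, 2), (2, 1), (2, 3)])

    # Correlated count in the SELECT list:
    subquery = conn.execute("""
        SELECT i.name,
               (SELECT COUNT(*) FROM amenities_index ai
                WHERE ai.amenity_id IN (1, 2) AND ai.item_id = i.id) AS points
        FROM items i
        ORDER BY points DESC
    """).fetchall()

    # Pre-aggregated join:
    joined = conn.execute("""
        SELECT i.name, ai.points
        FROM items i
        JOIN (SELECT item_id, COUNT(*) AS points
              FROM amenities_index WHERE amenity_id IN (1, 2)
              GROUP BY item_id) ai ON ai.item_id = i.id
        ORDER BY ai.points DESC
    """).fetchall()
    ```

    With every item having at least one matching amenity, the two forms return identical results; they differ when an item has none (the correlated form yields 0, the inner join drops the row).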

    qid & accept id: (18534648, 18534798) query: Custom ordering using Analytical Functions soup:

    soup wrap:

    I assume that you want to assign row_number() based on the ordering, because the analytic functions do not "order" tables. Did you try this?

    SELECT empno, ename, deptno,
           row_number() over (ORDER BY DECODE(deptno, NULL, 0, 2, 1, 3)) as seqnum
    FROM emp ;
    

    You could also do this without analytic functions at all:

    select e.*, rownum as seqnum
    from (SELECT empno, ename, deptno
          FROM emp
          ORDER BY DECODE (deptno, NULL, 0, 2, 1, 3)
         ) e
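    Outside Oracle, CASE plays the role of DECODE. A SQLite sketch with illustrative data: NULL departments sort first, then department 2, then everything else:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE emp (ename TEXT, deptno INTEGER)")
    conn.executemany("INSERT INTO emp VALUES (?, ?)",
                     [('smith', 1), ('jones', 2), ('king', None)])

    # CASE maps each deptno to the same sort keys DECODE would produce.
    rows = conn.execute("""
        SELECT ename, deptno
        FROM emp
        ORDER BY CASE WHEN deptno IS NULL THEN 0
                      WHEN deptno = 2 THEN 1
                      ELSE 3 END
    """).fetchall()
    ```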
    
    qid & accept id: (18547311, 18547696) query: Complex rolling scenario (CROSS APPLY and OUTER APPLY example) soup:

    soup wrap:

    I assume that you have a DimDate table with the following structure:

    CREATE TABLE DimDate
    (
    DateKey INT PRIMARY KEY
    );
    

    and the DateKey column doesn't have gaps.

    Solution:

    DECLARE @NumDays INT = 3;
    
    WITH    basic_cte AS
            (
                SELECT  x.DateKey,
                        d.Name,
                        Amount = ISNULL(f.Amount,0)
                FROM    
                (
                    SELECT  t.*, CONVERT(INT,CONVERT(CHAR(8),CONVERT(DATETIME,CONVERT(DATETIME,CONVERT(CHAR(8),t.LiveKey,112))+@NumDays),112)) AS EndLiveKey
                    FROM    #target t
                ) d 
                CROSS APPLY
                (
                    SELECT  dm.DateKey
                    FROM    DimDate dm
                    WHERE   dm.DateKey >= d.LiveKey 
                    AND     dm.DateKey < d.EndLiveKey           
                ) x
                LEFT OUTER JOIN #Fact f 
                ON f.PlayerKey = d.PlayerKey 
                AND f.DateKey = x.DateKey
            )
    SELECT  rn = ROW_NUMBER() OVER(PARTITION BY Name ORDER BY DateKey),
            y.*,
            "RollingAmount" = SUM(Amount) OVER(PARTITION BY Name ORDER BY DateKey)
    FROM    basic_cte y;
    

    Edit #1:

    DECLARE @NumDays INT = 3;
    
    WITH    basic_cte AS
            (
                SELECT  rn = ROW_NUMBER() OVER(PARTITION BY Name ORDER BY x.DateKey),
                        x.DateKey,
                        d.Name,
                        Amount      = ISNULL(f.Amount,0),
                        AmountAll   = ISNULL(fall.AmountAll,0)
                FROM    
                (
                    SELECT  t.*, CONVERT(INT,CONVERT(CHAR(8),CONVERT(DATETIME,CONVERT(DATETIME,CONVERT(CHAR(8),t.LiveKey,112))+@NumDays),112)) AS EndLiveKey
                    FROM    #target t
                ) d 
                CROSS APPLY
                (
                    SELECT  dm.DateKey
                    FROM    DimDate dm
                    WHERE   dm.DateKey >= d.LiveKey 
                    AND     dm.DateKey < d.EndLiveKey           
                ) x
                OUTER APPLY
                (
                    SELECT  SUM(fct.Amount) AS Amount
                    FROM    #Fact fct 
                    WHERE   fct.DateKey = x.DateKey
                    AND     fct.PlayerKey = d.PlayerKey
                ) f
                OUTER APPLY
                (
                    SELECT  SUM(fct.Amount) AS AmountAll 
                    FROM    #Fact fct 
                    WHERE   fct.DateKey = x.DateKey
                ) fall
            )
    SELECT  
            y.*,
            "RollingAmount"     = SUM(Amount) OVER(PARTITION BY Name ORDER BY DateKey),
            "RollingAmountAll"  = SUM(AmountAll) OVER(PARTITION BY Name ORDER BY DateKey)
    FROM    basic_cte y;
    

    Edit #2:

    DECLARE @NumDays INT = 3;
    
    WITH    basic_cte AS
            (
                SELECT  rn = ROW_NUMBER() OVER(PARTITION BY Name ORDER BY x.DateKey),
                        x.DateKey,
                        d.Name,
                        Amount      = ISNULL(f.Amount,0),
                        AmountAll   = ISNULL(f.AmountAll,0)
                FROM    
                (
                    SELECT  t.*, EndLiveKey = CONVERT(INT,CONVERT(CHAR(8),CONVERT(DATETIME,CONVERT(DATETIME,CONVERT(CHAR(8),t.LiveKey,112))+@NumDays),112))
                    FROM    #target t
                ) d 
                CROSS APPLY
                (
                    SELECT  dm.DateKey
                    FROM    DimDate dm
                    WHERE   dm.DateKey >= d.LiveKey 
                    AND     dm.DateKey < d.EndLiveKey           
                ) x
                OUTER APPLY
                (
                    SELECT  AmountAll   = SUM(fbase.Amount),
                            Amount      = SUM(CASE WHEN PlayerKey1 = PlayerKey2 THEN fbase.Amount END)
                    FROM
                    (
                        SELECT  fct.Amount, fct.PlayerKey AS PlayerKey1, d.PlayerKey AS PlayerKey2
                        FROM    #Fact fct 
                        WHERE   fct.DateKey = x.DateKey
                    ) fbase
                ) f
            )
    SELECT  
            y.*,
            "RollingAmount"     = SUM(Amount) OVER(PARTITION BY Name ORDER BY DateKey),
            "RollingAmountAll"  = SUM(AmountAll) OVER(PARTITION BY Name ORDER BY DateKey)
    FROM    basic_cte y;
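    The final SUM(...) OVER (PARTITION BY Name ORDER BY DateKey) is the heart of all three variants. Here is what it computes, sketched in plain Python over made-up rows already sorted by (name, datekey): a running total of Amount that resets for each Name:

    ```python
    rows = [
        # (name, datekey, amount) -- sorted by (name, datekey)
        ('anna', 20130801, 10),
        ('anna', 20130802, 0),
        ('anna', 20130803, 5),
        ('bob',  20130801, 7),
        ('bob',  20130802, 2),
    ]

    rolling = []
    prev_name, total = None, 0
    for name, datekey, amount in rows:
        if name != prev_name:          # new partition: reset the running sum
            prev_name, total = name, 0
        total += amount
        rolling.append((name, datekey, total))
    ```

    Zero-amount days (the ISNULL(...,0) rows produced by the date expansion) carry the previous total forward, which is what makes the rolling figure continuous across gaps in the fact table.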
    
    qid & accept id: (18570414, 18570443) query: how to pass parameter to procedure and call in where clause soup:

    soup wrap:

    You need to use glb_date = @d_date

    First you'll need to alter how the parameter is defined in the CREATE PROCEDURE definition, for example:

    CREATE PROCEDURE prac
    (
       @d_date DATE
    )
    

    Notice the @

    Then change your WHERE clause to use the variable:

     where glb_date= @d_date;
    
    qid & accept id: (18575984, 18576134) query: Pivot a fixed multiple column table in sql server soup:

    soup wrap:

    This one will do what you want, but you have to specify all the dates:

    select
       c.Name,
       max(case when t.DateCreated = '2013-08-26' then c.Value end) as [2013-08-26],
       max(case when t.DateCreated = '2013-08-27' then c.Value end) as [2013-08-27],
       max(case when t.DateCreated = '2013-08-28' then c.Value end) as [2013-08-28],
       max(case when t.DateCreated = '2013-08-29' then c.Value end) as [2013-08-29],
       max(case when t.DateCreated = '2013-08-30' then c.Value end) as [2013-08-30],
       max(case when t.DateCreated = '2013-08-31' then c.Value end) as [2013-08-31],
       max(case when t.DateCreated = '2013-09-01' then c.Value end) as [2013-09-01]
    from test as t
       outer apply (
           select 'Rands', Rands union all
           select 'Units', Units union all
           select 'Average Price', [Average Price] union all
           select 'Success %', [Success %] union all
           select 'Unique Users', [Unique Users]
       ) as C(Name, Value)
    group by c.Name
    

    You can create dynamic SQL for this, something like this:

    declare @stmt nvarchar(max)
    
    select @stmt = isnull(@stmt + ',', '') + 
        'max(case when t.DateCreated = ''' + convert(nvarchar(8), t.DateCreated, 112) + ''' then c.Value end) as [' + convert(nvarchar(8), t.DateCreated, 112) + ']'
    from test as t
    
    select @stmt = '
       select
           c.Name, ' + @stmt + ' from test as t
       outer apply (
           select ''Rands'', Rands union all
           select ''Units'', Units union all
           select ''Average Price'', [Average Price] union all
           select ''Success %'', [Success %] union all
           select ''Unique Users'', [Unique Users]
       ) as C(Name, Value)
       group by c.Name'
    
    exec sp_executesql @stmt = @stmt
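
The same build-the-columns-then-execute pattern can be sketched outside T-SQL. Below is a minimal sqlite3 version in Python; `test` is a toy table with a single `Rands` metric standing in for the real columns:

```python
import sqlite3

# A minimal sqlite3 sketch of the dynamic pivot above; "test" is a toy
# table with a single metric (Rands) standing in for the real columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (DateCreated TEXT, Rands INTEGER)")
conn.executemany("INSERT INTO test VALUES (?, ?)",
                 [("2013-08-26", 10), ("2013-08-27", 20)])

# Step 1: collect the distinct dates that become the pivot columns.
dates = [r[0] for r in conn.execute(
    "SELECT DISTINCT DateCreated FROM test ORDER BY DateCreated")]

# Step 2: build one max(case ...) column per date, then execute the result.
cols = ", ".join(
    "max(case when DateCreated = '{0}' then Rands end) as [{0}]".format(d)
    for d in dates)
row = conn.execute("SELECT " + cols + " FROM test").fetchone()
print(row)  # one row, one column per date: (10, 20)
```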
    
    qid & accept id: (18613117, 18614557) query: Sorting data from two different sorted cursors data of different tables into One soup:

    You could combine both queries into a single query.

    \n

    First, ensure that both results have the same number of columns.\nIf not, you might need to add some dummy column(s) to one query.

    \n

    Then combine the two with UNION ALL:

    \n
    SELECT alpha, beeta, gamma, Remark, id,   number FROM X\nUNION ALL\nSELECT Type,  Date,  gamma, Obs,    NULL, number FROM Y\n
    \n

    Then pick one column of the entire result that you want to order by.\n(The column names of the result come from the first query.)\nIn this case, the Start column is not part of the result, so we have to add it (and the Date column is duplicated in the second query, but this is necessary for its values to end up in the result column that is used for sorting):

    \n
    SELECT alpha, beeta, gamma, Remark, id,   number, Start AS SortThis FROM X\nUNION ALL\nSELECT Type,  Date,  gamma, Obs,    NULL, number, Date              FROM Y\nORDER BY SortThis\n
    \n soup wrap:

    You could combine both queries into a single query.

    First, ensure that both results have the same number of columns. If not, you might need to add some dummy column(s) to one query.

    Then combine the two with UNION ALL:

    SELECT alpha, beeta, gamma, Remark, id,   number FROM X
    UNION ALL
    SELECT Type,  Date,  gamma, Obs,    NULL, number FROM Y
    

    Then pick one column of the entire result that you want to order by. (The column names of the result come from the first query.) In this case, the Start column is not part of the result, so we have to add it (and the Date column is duplicated in the second query, but this is necessary for its values to end up in the result column that is used for sorting):

    SELECT alpha, beeta, gamma, Remark, id,   number, Start AS SortThis FROM X
    UNION ALL
    SELECT Type,  Date,  gamma, Obs,    NULL, number, Date              FROM Y
    ORDER BY SortThis
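
The technique above, giving each branch of the UNION ALL a shared sort column, can be demonstrated with sqlite3; `X` and `Y` below are toy stand-ins (assumed shapes) for the question's tables:

```python
import sqlite3

# Sketch of the UNION ALL + shared sort column technique; X and Y are
# toy stand-ins (assumed shapes) for the two tables in the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE X (alpha TEXT, Start INTEGER)")
conn.execute("CREATE TABLE Y (Type TEXT, Date INTEGER)")
conn.executemany("INSERT INTO X VALUES (?, ?)", [("x1", 3), ("x2", 1)])
conn.execute("INSERT INTO Y VALUES ('y1', 2)")

# The first branch names the sort column; the second just supplies values.
rows = conn.execute("""
    SELECT alpha, Start AS SortThis FROM X
    UNION ALL
    SELECT Type, Date FROM Y
    ORDER BY SortThis
""").fetchall()
print(rows)  # rows from both tables interleaved by SortThis
```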
    
    qid & accept id: (18619973, 18620578) query: Date a year from now and check what is the next Term from that Date soup:

    I think you are over complicating the problem, but as you requested, try this:

    \n
    DECLARE @terms TABLE(term varchar(50),termStartDate date, termEndDate date)\nINSERT INTO @terms VALUES('Fall 2012','8/27/2012','12/15/2012')\nINSERT INTO @terms VALUES('Spring 2013','1/14/2013','4/26/2013')\nINSERT INTO @terms VALUES('Sumr I 2013','5/6/2013','6/29/2013')\nINSERT INTO @terms VALUES('Sumr II 2013','7/1/2013','8/24/2013')\nINSERT INTO @terms VALUES('Fall 2013','8/26/2013','12/14/2013')\nINSERT INTO @terms VALUES('Spring 2014','1/13/2014','4/26/2014')\n\nDECLARE @today date =GETDATE()\nSELECT @today = termEndDate \n    FROM @terms \n    WHERE termStartDate<=@today AND termEndDate>=@today\nSELECT term \n    FROM @terms \n    WHERE termStartDate>=DATEADD(d,-360,@today) AND termStartDate<=GETDATE()\n
    \n

    This will list all terms included in the period 360 days prior to the end of the current term.

    \n

    UPDATE

    \n
    SELECT min(termStartDate)startDate FROM (\n    SELECT termStartDate \n        FROM @terms \n        GROUP BY termStartDate \n        HAVING termStartDate>=DATEADD(d,-360,@today) \n               AND termStartDate<=GETDATE()\n)z\n
    \n

    will get the startDate for the earliest term.

    \n soup wrap:

    I think you are over complicating the problem, but as you requested, try this:

    DECLARE @terms TABLE(term varchar(50),termStartDate date, termEndDate date)
    INSERT INTO @terms VALUES('Fall 2012','8/27/2012','12/15/2012')
    INSERT INTO @terms VALUES('Spring 2013','1/14/2013','4/26/2013')
    INSERT INTO @terms VALUES('Sumr I 2013','5/6/2013','6/29/2013')
    INSERT INTO @terms VALUES('Sumr II 2013','7/1/2013','8/24/2013')
    INSERT INTO @terms VALUES('Fall 2013','8/26/2013','12/14/2013')
    INSERT INTO @terms VALUES('Spring 2014','1/13/2014','4/26/2014')
    
    DECLARE @today date =GETDATE()
    SELECT @today = termEndDate 
        FROM @terms 
        WHERE termStartDate<=@today AND termEndDate>=@today
    SELECT term 
        FROM @terms 
        WHERE termStartDate>=DATEADD(d,-360,@today) AND termStartDate<=GETDATE()
    

    This will list all terms included in the period 360 days prior to the end of the current term.

    UPDATE

    SELECT min(termStartDate)startDate FROM (
        SELECT termStartDate 
            FROM @terms 
            GROUP BY termStartDate 
            HAVING termStartDate>=DATEADD(d,-360,@today) 
                   AND termStartDate<=GETDATE()
    )z
    

    will get the startDate for the earliest term.

    qid & accept id: (18629310, 18629411) query: Split string with proper format soup:

    Reverse the string and search for the index of the first \. Then take the right part of your column using this index.

    \n
    SELECT RIGHT(Filename,PATINDEX('%\%',REVERSE(Filename))-1)\n
    \n

    If you want to turn File_1.70837292036d41139fcf8fa6b4997d3c.pdf to File_1.pdf then you could try the following, though it might look ugly:

    \n
    SELECT \nLEFT\n(\n    RIGHT\n    (\n        Filepath,\n        CASE WHEN PATINDEX('%\%',REVERSE(Filepath)) > 0 \n        THEN PATINDEX('%\%',REVERSE(Filepath))-1 \n        ELSE LEN(Filepath) \n        END \n    ),\n    CASE WHEN \n    PATINDEX\n    (\n        '%.%',\n        RIGHT\n        (\n            Filepath,\n            CASE WHEN PATINDEX('%\%',REVERSE(Filepath)) > 0 \n            THEN PATINDEX('%\%',REVERSE(Filepath))-1 \n            ELSE LEN(Filepath)  \n            END\n        )\n    )>0\n    THEN\n    PATINDEX\n    (\n        '%.%',\n        RIGHT\n        (\n            Filepath,\n            CASE WHEN PATINDEX('%\%',REVERSE(Filepath)) > 0 \n            THEN PATINDEX('%\%',REVERSE(Filepath))-1 \n            ELSE LEN(Filepath)  \n            END\n        )\n    )-1\n    ELSE 0 END\n)\n+\nRIGHT\n(\n    Filepath,\n    CASE WHEN PATINDEX('%.%',REVERSE(Filepath)) > 0 \n    THEN PATINDEX('%.%',REVERSE(Filepath)) \n    ELSE LEN(Filepath)  \n    END\n)\n
    \n soup wrap:

    Reverse the string and search for the index of the first \. Then take the right part of your column using this index.

    SELECT RIGHT(Filename,PATINDEX('%\%',REVERSE(Filename))-1)
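
For comparison, the REVERSE + PATINDEX idiom amounts to "find the last backslash and keep what follows"; here is the same idea in Python:

```python
# The REVERSE + PATINDEX idiom above amounts to "find the last backslash
# and keep what follows"; the same thing in Python for comparison.
def after_last_backslash(path: str) -> str:
    # rfind returns -1 when there is no backslash, so a bare filename
    # comes back unchanged (the SQL version would instead error on
    # RIGHT(x, -1) when PATINDEX finds no match).
    return path[path.rfind("\\") + 1:]

print(after_last_backslash(r"C:\docs\File_1.pdf"))  # File_1.pdf
```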
    

    If you want to turn File_1.70837292036d41139fcf8fa6b4997d3c.pdf to File_1.pdf then you could try the following, though it might look ugly:

    SELECT 
    LEFT
    (
        RIGHT
        (
            Filepath,
            CASE WHEN PATINDEX('%\%',REVERSE(Filepath)) > 0 
            THEN PATINDEX('%\%',REVERSE(Filepath))-1 
            ELSE LEN(Filepath) 
            END 
        ),
        CASE WHEN 
        PATINDEX
        (
            '%.%',
            RIGHT
            (
                Filepath,
                CASE WHEN PATINDEX('%\%',REVERSE(Filepath)) > 0 
                THEN PATINDEX('%\%',REVERSE(Filepath))-1 
                ELSE LEN(Filepath)  
                END
            )
        )>0
        THEN
        PATINDEX
        (
            '%.%',
            RIGHT
            (
                Filepath,
                CASE WHEN PATINDEX('%\%',REVERSE(Filepath)) > 0 
                THEN PATINDEX('%\%',REVERSE(Filepath))-1 
                ELSE LEN(Filepath)  
                END
            )
        )-1
        ELSE 0 END
    )
    +
    RIGHT
    (
        Filepath,
        CASE WHEN PATINDEX('%.%',REVERSE(Filepath)) > 0 
        THEN PATINDEX('%.%',REVERSE(Filepath)) 
        ELSE LEN(Filepath)  
        END
    )
    
    qid & accept id: (18644056, 18644112) query: multiple count conditions with single query soup:

    If you want the number of students who got an A in History in one column, the number who got a B in Maths in a second, and the number who got an E in Geography in a third, then:

    \n
    select\n    sum(case when [History] = 'A' then 1 else 0 end) as HistoryA,\n    sum(case when [Maths] = 'B' then 1 else 0 end) as MathsB,\n    sum(case when [Geography] = 'E' then 1 else 0 end) as GeographyE\nfrom Table1\n
    \n

    If you want to count students who got A in history, B in maths and E in Geography:

    \n
    select count(*)\nfrom Table1\nwhere [History] = 'A' and [Maths] = 'B' and [Geography] = 'E'\n
    \n soup wrap:

    If you want the number of students who got an A in History in one column, the number who got a B in Maths in a second, and the number who got an E in Geography in a third, then:

    select
        sum(case when [History] = 'A' then 1 else 0 end) as HistoryA,
        sum(case when [Maths] = 'B' then 1 else 0 end) as MathsB,
        sum(case when [Geography] = 'E' then 1 else 0 end) as GeographyE
    from Table1
    

    If you want to count students who got A in history, B in maths and E in Geography:

    select count(*)
    from Table1
    where [History] = 'A' and [Maths] = 'B' and [Geography] = 'E'
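
Both counting styles can be checked quickly with sqlite3 against a toy Table1 (column names assumed from the answer):

```python
import sqlite3

# Both counting styles from above, run against a toy Table1
# (column names assumed from the answer).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (History TEXT, Maths TEXT, Geography TEXT)")
conn.executemany("INSERT INTO Table1 VALUES (?, ?, ?)",
                 [("A", "B", "E"), ("A", "C", "D"), ("B", "B", "E")])

# One row with one conditional count per column.
per_subject = conn.execute("""
    select sum(case when History   = 'A' then 1 else 0 end),
           sum(case when Maths     = 'B' then 1 else 0 end),
           sum(case when Geography = 'E' then 1 else 0 end)
    from Table1
""").fetchone()

# One count of rows satisfying all three conditions at once.
all_three = conn.execute("""
    select count(*) from Table1
    where History = 'A' and Maths = 'B' and Geography = 'E'
""").fetchone()[0]
print(per_subject, all_three)  # (2, 2, 2) 1
```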
    
    qid & accept id: (18651768, 18652023) query: How to select data from another sql server server tables in sql script? soup:

    You can indeed use the

    \n
    OPENDATASOURCE\n
    \n

    or

    \n
    OPENROWSET\n
    \n

    Note that you have to turn on the ad hoc distributed queries option:

    \n
    sp_configure 'show advanced options', 1;\nRECONFIGURE;\nsp_configure 'Ad Hoc Distributed Queries', 1;\nRECONFIGURE;\nGO\n
    \n soup wrap:

    You can indeed use the

    OPENDATASOURCE
    

    or

    OPENROWSET
    

    Note that you have to turn on the ad hoc distributed queries option:

    sp_configure 'show advanced options', 1;
    RECONFIGURE;
    sp_configure 'Ad Hoc Distributed Queries', 1;
    RECONFIGURE;
    GO
    
    qid & accept id: (18669731, 18669821) query: Keyword search using query soup:

    First, the answer is no, but if you change it to:

    \n
    SELECT * FROM keywords WHERE column_name LIKE '%?%'\n
    \n

    it should work.

    \n

    Second, it's not clear from your question how the table is constructed. If it's something like:

    \n
     -----------------------------------------------------\n|column1 |column2 |column3 |column4 |column5 |column6 |\n -----------------------------------------------------\n|blablaa1|blablaa2|blablaa3|blablaa4|blabla?5|blablaa6|\n -----------------------------------------------------\n...\n
    \n

    then the answer I wrote above won't work; the design is not good and should be replaced with one keyword per row. Another approach would be to query the table as follows:

    \n
    SELECT * FROM keywords WHERE column1 LIKE '%?%' OR \ncolumn2 LIKE '%?%' OR \ncolumn3 LIKE '%?%' OR \n...\n
    \n

    but, as I just mentioned, this is NOT a good way to construct your table and you'd better think how to re-design it for better performance & maintenance.

    \n soup wrap:

    First, the answer is no, but if you change it to:

    SELECT * FROM keywords WHERE column_name LIKE '%?%'
    

    it should work.

    Second, it's not clear from your question how the table is constructed. If it's something like:

     -----------------------------------------------------
    |column1 |column2 |column3 |column4 |column5 |column6 |
     -----------------------------------------------------
    |blablaa1|blablaa2|blablaa3|blablaa4|blabla?5|blablaa6|
     -----------------------------------------------------
    ...
    

    then the answer I wrote above won't work; the design is not good and should be replaced with one keyword per row. Another approach would be to query the table as follows:

    SELECT * FROM keywords WHERE column1 LIKE '%?%' OR 
    column2 LIKE '%?%' OR 
    column3 LIKE '%?%' OR 
    ...
    

    but, as I just mentioned, this is NOT a good way to construct your table, and you should think about how to redesign it for better performance and maintenance.

    qid & accept id: (18708680, 19060795) query: Efficiently joining/merging based on matching part of a string soup:

    This is a partial answer that makes it run 4-5X faster, but it isn't ideal (it helps in my case, but wouldn't necessarily work in the general case of optimizing a Cartesian product join).

    \n

    I originally had 4 separate index() statements like in my example (my simplified sample had 2 for A.first and A.last).

    \n

    I was able to refactor all 4 of those index() statements (plus a 5th I was going to add) into a regular expression that solves the same problem. It won't return an identical result set, but I think it actually returns better results than the 5 separate indexes since you can specify word edges.

    \n

    In the datastep where I clean the names for matching, I create the following pattern:

    \n
    pattern = cats('/\b(',substr(upcase(first_name),1,1),'|',upcase(first_name),').?\s?',upcase(last_name),'\b/');\n
    \n

    This should create a regex along the lines of /\b(F|FIRST).?\s?LAST\b/ which will match anything like F. Last, First Last, flast@email.com, etc (there are combinations that it doesn't pick up, but I was only concerned with combinations that I observe in my data). Using '\b' also doesn't allow things where FLAST happens to be the same as the start/end of a word (such as "Edward Lo" getting matched to "Eloquent") which I find hard to avoid with index()

    \n

    Then I do my sql join like this:

    \n
    proc sql noprint;\ncreate table matched as\n  select  B.*, \n          prxparse(B.pattern) as prxm, \n          A.* \n  from  search_text as A,\n        search_names as B\n  where prxmatch(calculated prxm,A.notes)\n  order by A.id;\nquit;\nrun;\n
    \n

    Being able to compile the regex once per name in B, and then run it on each piece of text in A seems to be dramatically faster than a couple of index statements (not sure about the case of a regex vs a single index).

    \n

    Running it with A=250,000 Obs and B=4,000 Obs, took something like 90 minutes of CPU time for the index() method, while doing the same with prxmatch() took only 20 minutes of CPU time.

    \n soup wrap:

    This is a partial answer that makes it run 4-5X faster, but it isn't ideal (it helps in my case, but wouldn't necessarily work in the general case of optimizing a Cartesian product join).

    I originally had 4 separate index() statements like in my example (my simplified sample had 2 for A.first and A.last).

    I was able to refactor all 4 of those index() statements (plus a 5th I was going to add) into a regular expression that solves the same problem. It won't return an identical result set, but I think it actually returns better results than the 5 separate indexes since you can specify word edges.

    In the datastep where I clean the names for matching, I create the following pattern:

    pattern = cats('/\b(',substr(upcase(first_name),1,1),'|',upcase(first_name),').?\s?',upcase(last_name),'\b/');
    

    This should create a regex along the lines of /\b(F|FIRST).?\s?LAST\b/ which will match anything like F. Last, First Last, flast@email.com, etc. (there are combinations that it doesn't pick up, but I was only concerned with combinations that I observe in my data). Using '\b' also doesn't allow things where FLAST happens to be the same as the start/end of a word (such as "Edward Lo" getting matched to "Eloquent"), which I find hard to avoid with index().

    Then I do my sql join like this:

    proc sql noprint;
    create table matched as
      select  B.*, 
              prxparse(B.pattern) as prxm, 
              A.* 
      from  search_text as A,
            search_names as B
      where prxmatch(calculated prxm,A.notes)
      order by A.id;
    quit;
    run;
    

    Being able to compile the regex once per name in B, and then run it on each piece of text in A seems to be dramatically faster than a couple of index statements (not sure about the case of a regex vs a single index).

    Running it with A=250,000 Obs and B=4,000 Obs, took something like 90 minutes of CPU time for the index() method, while doing the same with prxmatch() took only 20 minutes of CPU time.
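
The pattern built by the cats() call can be exercised in any compatible regex engine; here is a sketch in Python's re module (the pattern text is identical for these constructs, so this is a cheap way to test it outside SAS):

```python
import re

# The pattern built by the cats() call above, reproduced in Python's re;
# the regex text is identical for these constructs, so this is a cheap
# way to test the pattern outside SAS.
def name_pattern(first: str, last: str):
    f, l = first.upper(), last.upper()
    # initial-or-first-name, optional punctuation, optional space, last name
    return re.compile(rf"\b({f[0]}|{f}).?\s?{l}\b")

pat = name_pattern("First", "Last")
print(bool(pat.search("F. LAST")))   # True: initial, period, space
print(bool(pat.search("FLAST")))     # True: fused form, as in an email
print(bool(pat.search("ELOQUENT")))  # False: \b blocks substring hits
```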

    qid & accept id: (18724492, 18724569) query: Deleting database existing record while asigning values from one row to other with unique values soup:

    This can only be done in multiple steps (i.e. not a single UPDATE statement) in MySQL, because of the following points

    \n

    Point 1: To get a list of rows that do not have the same pid as other rows, you would need to do a query before your update. For example:

    \n
    SELECT id FROM `order` \nWHERE pid NOT IN (\n   SELECT pid FROM `order`\n   GROUP BY pid\n   HAVING COUNT(*) > 1\n)\n
    \n

    That'll give you the list of IDs that don't share a pid with other rows. However we have to deal with Point 2, from http://dev.mysql.com/doc/refman/5.6/en/subquery-restrictions.html:

    \n
    \n

    In general, you cannot modify a table and select from the same table in a subquery.

    \n
    \n

    That means you can't use such a subquery in your UPDATE statement. You're going to have to use a staging table to store the pids and UPDATE based on that set.

    \n

    For example, the following code creates a temporary table called badpids that contains all pids that appear multiple times in the orders table. Then, we execute the UPDATE, but only for rows that don't have a pid in the list of badpids:

    \n
    CREATE TEMPORARY TABLE badpids (pid int);\n\nINSERT INTO badpids\n   SELECT pid FROM `order`\n   GROUP BY pid\n   HAVING COUNT(*) > 1;\n\nUPDATE `order` SET cid = 1\nWHERE cid= 2 \nAND pid NOT IN (SELECT pid FROM badpids);\n
    \n soup wrap:

    This can only be done in multiple steps (i.e. not a single UPDATE statement) in MySQL, because of the following points:

    Point 1: To get a list of rows that do not have the same pid as other rows, you would need to do a query before your update. For example:

    SELECT id FROM `order` 
    WHERE pid NOT IN (
       SELECT pid FROM `order`
       GROUP BY pid
       HAVING COUNT(*) > 1
    )
    

    That'll give you the list of IDs that don't share a pid with other rows. However we have to deal with Point 2, from http://dev.mysql.com/doc/refman/5.6/en/subquery-restrictions.html:

    In general, you cannot modify a table and select from the same table in a subquery.

    That means you can't use such a subquery in your UPDATE statement. You're going to have to use a staging table to store the pids and UPDATE based on that set.

    For example, the following code creates a temporary table called badpids that contains all pids that appear multiple times in the orders table. Then, we execute the UPDATE, but only for rows that don't have a pid in the list of badpids:

    CREATE TEMPORARY TABLE badpids (pid int);
    
    INSERT INTO badpids
       SELECT pid FROM `order`
       GROUP BY pid
       HAVING COUNT(*) > 1;
    
    UPDATE `order` SET cid = 1
    WHERE cid= 2 
    AND pid NOT IN (SELECT pid FROM badpids);
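
The two-step flow can be sketched with sqlite3 (SQLite itself would permit the self-referencing subquery, but the staging-table sequence below mirrors the MySQL workaround):

```python
import sqlite3

# The staging-table flow from above, sketched with sqlite3. SQLite would
# actually permit the self-referencing subquery, but the two-step
# sequence mirrors what MySQL requires.
conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "order" (id INTEGER, pid INTEGER, cid INTEGER)')
conn.executemany('INSERT INTO "order" VALUES (?, ?, ?)',
                 [(1, 10, 2), (2, 10, 2), (3, 20, 2)])

# Stage the pids that appear more than once...
conn.execute("CREATE TEMP TABLE badpids (pid INTEGER)")
conn.execute('INSERT INTO badpids SELECT pid FROM "order" '
             'GROUP BY pid HAVING COUNT(*) > 1')

# ...then update only the rows whose pid is not staged.
conn.execute('UPDATE "order" SET cid = 1 '
             'WHERE cid = 2 AND pid NOT IN (SELECT pid FROM badpids)')

rows = conn.execute('SELECT id, cid FROM "order" ORDER BY id').fetchall()
print(rows)  # only id 3, the row with a unique pid, was updated
```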
    
    qid & accept id: (18737626, 18738555) query: SQL: selecting things ONLY associated with one value soup:

    I hope I understand your question correctly: you want a list of all fruits (with the same name/title) returned only if there is just one kind of color for that fruit; otherwise you want none in your results.

    \n

    This looks a bit dirty using a subquery but is the best I could come up with in short time:

    \n

    using this table structure:

    \n
    CREATE TABLE Fruits (Id INT PRIMARY KEY auto_increment, Title VARCHAR(63), Colour VARCHAR(63));\n\nINSERT INTO Fruits (Title, Colour)\n  SELECT 'Apple', 'Green'\n  UNION ALL\n  SELECT 'Apple', 'Green'\n  UNION ALL\n  SELECT 'Apple', 'Blue'\n  UNION\n  SELECT 'Orange', 'Yellow'\n  UNION ALL\n  SELECT 'Orange', 'Yellow';\n
    \n

    You can perform this query

    \n
    SELECT\n    Id\n  FROM Fruits AS OuterFruits\n  WHERE\n    Title = 'Orange'\n    AND\n    (\n      SELECT\n          COUNT(Colour)\n         FROM Fruits AS InnerFruits\n         WHERE\n          InnerFruits.Colour != OuterFruits.Colour\n          AND InnerFruits.Title = OuterFruits.Title\n    ) = 0;\n
    \n

    This will give the rows of the two oranges inserted; if, however, you were to replace 'Orange' with 'Apple' in that last query, you would get an empty result set, because there are different colours of apples available.

    \n

    You can try that online in this fiddle also.

    \n

    Please note that this is mysql-syntax (since you did not include any special sql version, but I'm pretty sure only auto_increment is mysql-specific)

    \n soup wrap:

    I hope I understand your question correctly: you want a list of all fruits (with the same name/title) returned only if there is just one kind of color for that fruit; otherwise you want none in your results.

    This looks a bit dirty using a subquery but is the best I could come up with in short time:

    using this table structure:

    CREATE TABLE Fruits (Id INT PRIMARY KEY auto_increment, Title VARCHAR(63), Colour VARCHAR(63));
    
    INSERT INTO Fruits (Title, Colour)
      SELECT 'Apple', 'Green'
      UNION ALL
      SELECT 'Apple', 'Green'
      UNION ALL
      SELECT 'Apple', 'Blue'
      UNION
      SELECT 'Orange', 'Yellow'
      UNION ALL
      SELECT 'Orange', 'Yellow';
    

    You can perform this query

    SELECT
        Id
      FROM Fruits AS OuterFruits
      WHERE
        Title = 'Orange'
        AND
        (
          SELECT
              COUNT(Colour)
             FROM Fruits AS InnerFruits
             WHERE
              InnerFruits.Colour != OuterFruits.Colour
              AND InnerFruits.Title = OuterFruits.Title
        ) = 0;
    

    This will give the rows of the two oranges inserted; if, however, you were to replace 'Orange' with 'Apple' in that last query, you would get an empty result set, because there are different colours of apples available.

    You can try that online in this fiddle also.

    Please note that this is mysql-syntax (since you did not include any special sql version, but I'm pretty sure only auto_increment is mysql-specific)
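
A quick way to verify the behaviour described above is to run the same schema and correlated-COUNT query through sqlite3 (AUTO_INCREMENT is spelled differently there, so plain INTEGER PRIMARY KEY is used, which auto-assigns ids):

```python
import sqlite3

# The sample schema and correlated-COUNT query from above, run through
# sqlite3; INTEGER PRIMARY KEY auto-assigns ids in SQLite.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Fruits "
             "(Id INTEGER PRIMARY KEY, Title TEXT, Colour TEXT)")
conn.executemany("INSERT INTO Fruits (Title, Colour) VALUES (?, ?)",
                 [("Apple", "Green"), ("Apple", "Green"), ("Apple", "Blue"),
                  ("Orange", "Yellow"), ("Orange", "Yellow")])

def single_colour_ids(title):
    # Rows qualify only when no other colour exists for the same title.
    return [r[0] for r in conn.execute("""
        SELECT Id FROM Fruits AS OuterFruits
        WHERE Title = ?
          AND (SELECT COUNT(Colour) FROM Fruits AS InnerFruits
               WHERE InnerFruits.Colour != OuterFruits.Colour
                 AND InnerFruits.Title = OuterFruits.Title) = 0
    """, (title,))]

print(single_colour_ids("Orange"))  # the two orange rows
print(single_colour_ids("Apple"))   # empty: apples come in two colours
```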

    qid & accept id: (18747853, 18748008) query: mySQL SELECT upcoming birthdays soup:

    To get all birthdays in the next 7 days, add the year difference between the date of birth and today to the date of birth, then check whether the result falls within the next seven days.

    \n
    SELECT * \nFROM  persons \nWHERE  DATE_ADD(birthday, \n                INTERVAL YEAR(CURDATE())-YEAR(birthday)\n                         + IF(DAYOFYEAR(CURDATE()) > DAYOFYEAR(birthday),1,0)\n                YEAR)  \n            BETWEEN CURDATE() AND DATE_ADD(CURDATE(), INTERVAL 7 DAY);\n
    \n

    If you want to exclude today's birthdays just change > to >=

    \n
    SELECT * \nFROM  persons \nWHERE  DATE_ADD(birthday, \n                INTERVAL YEAR(CURDATE())-YEAR(birthday)\n                         + IF(DAYOFYEAR(CURDATE()) >= DAYOFYEAR(birthday),1,0)\n                YEAR)  \n            BETWEEN CURDATE() AND DATE_ADD(CURDATE(), INTERVAL 7 DAY);\n\n-- Same as above query with another way to exclude today's birthdays \nSELECT * \nFROM  persons \nWHERE  DATE_ADD(birthday, \n                INTERVAL YEAR(CURDATE())-YEAR(birthday)\n                         + IF(DAYOFYEAR(CURDATE()) > DAYOFYEAR(birthday),1,0)\n                YEAR) \n            BETWEEN CURDATE() AND DATE_ADD(CURDATE(), INTERVAL 7 DAY)\n     AND DATE_ADD(birthday, INTERVAL YEAR(CURDATE())-YEAR(birthday) YEAR) <> CURDATE();\n\n\n-- Same as above query with another way to exclude today's birthdays \nSELECT * \nFROM  persons \nWHERE  DATE_ADD(birthday, \n                INTERVAL YEAR(CURDATE())-YEAR(birthday)\n                         + IF(DAYOFYEAR(CURDATE()) > DAYOFYEAR(birthday),1,0)\n                YEAR) \n            BETWEEN CURDATE() AND DATE_ADD(CURDATE(), INTERVAL 7 DAY)\n     AND (MONTH(birthday) <> MONTH(CURDATE()) OR DAY(birthday) <> DAY(CURDATE()));\n
    \n

    Here is a DEMO of all queries

    \n soup wrap:

    To get all birthdays in the next 7 days, add the year difference between the date of birth and today to the date of birth, then check whether the result falls within the next seven days.

    SELECT * 
    FROM  persons 
    WHERE  DATE_ADD(birthday, 
                    INTERVAL YEAR(CURDATE())-YEAR(birthday)
                             + IF(DAYOFYEAR(CURDATE()) > DAYOFYEAR(birthday),1,0)
                    YEAR)  
                BETWEEN CURDATE() AND DATE_ADD(CURDATE(), INTERVAL 7 DAY);
    

    If you want to exclude today's birthdays just change > to >=

    SELECT * 
    FROM  persons 
    WHERE  DATE_ADD(birthday, 
                    INTERVAL YEAR(CURDATE())-YEAR(birthday)
                             + IF(DAYOFYEAR(CURDATE()) >= DAYOFYEAR(birthday),1,0)
                    YEAR)  
                BETWEEN CURDATE() AND DATE_ADD(CURDATE(), INTERVAL 7 DAY);
    
    -- Same as above query with another way to exclude today's birthdays 
    SELECT * 
    FROM  persons 
    WHERE  DATE_ADD(birthday, 
                    INTERVAL YEAR(CURDATE())-YEAR(birthday)
                             + IF(DAYOFYEAR(CURDATE()) > DAYOFYEAR(birthday),1,0)
                    YEAR) 
                BETWEEN CURDATE() AND DATE_ADD(CURDATE(), INTERVAL 7 DAY)
         AND DATE_ADD(birthday, INTERVAL YEAR(CURDATE())-YEAR(birthday) YEAR) <> CURDATE();
    
    
    -- Same as above query with another way to exclude today's birthdays 
    SELECT * 
    FROM  persons 
    WHERE  DATE_ADD(birthday, 
                    INTERVAL YEAR(CURDATE())-YEAR(birthday)
                             + IF(DAYOFYEAR(CURDATE()) > DAYOFYEAR(birthday),1,0)
                    YEAR) 
                BETWEEN CURDATE() AND DATE_ADD(CURDATE(), INTERVAL 7 DAY)
         AND (MONTH(birthday) <> MONTH(CURDATE()) OR DAY(birthday) <> DAY(CURDATE()));
    

    Here is a DEMO of all queries
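
The same "move the birthday into the current year, then range-check" idea can be sanity-checked in plain Python (Feb 29 birthdays would need extra handling that is omitted here):

```python
from datetime import date

# The same "move the birthday into the current year, then range-check"
# logic in plain Python (Feb 29 birthdays need extra handling, omitted).
def birthday_upcoming(birthday: date, today: date, days: int = 7) -> bool:
    nxt = birthday.replace(year=today.year)
    if nxt < today:                      # already passed this year
        nxt = birthday.replace(year=today.year + 1)
    return (nxt - today).days <= days

today = date(2013, 9, 10)
print(birthday_upcoming(date(1990, 9, 15), today))  # True: 5 days away
print(birthday_upcoming(date(1990, 10, 1), today))  # False: 21 days away
```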

    qid & accept id: (18749306, 18749534) query: Create table and get data from another table soup:

    Try this

    \n
    --create table without realization column\nCREATE TABLE [dbo].[CostCategory](\n[ID_CostCategory] [int] NOT NULL,\n[Name] [varchar](150) NOT NULL,\n[Plan] [money] NOT NULL\n) go\n\nCREATE TABLE [dbo].[Cost](\n[ID_Cost] [int] NOT NULL,\n[Name] [varchar](50) NULL,\n[ID_CostCategory] [int] NULL,\n[ID_Department] [int] NULL,\n[ID_Project] [int] NULL,\n[Value] [money] NULL,\n\n) go \n
    \n

    Create a UDF to calculate sum of the cost column:

    \n
    CREATE FUNCTION [dbo].[CalculateRealization](@Id INT) \nRETURNS money\nAS \nBEGIN\n  DECLARE @cost money\n\n  SELECT @cost = SUM(Value)\n  FROM [dbo].[Cost]\n  WHERE [ID_CostCategory] = @ID\n\n  return @cost\nEND\n
    \n

    Now Alter your CostCategory table to add computed column:

    \n
    ALTER TABLE [dbo].[CostCategory]\n   ADD [Realization] AS dbo.CalculateRealization(ID_CostCategory);\n
    \n

    Now you can select Realization from Costcategory

    \n
    SELECT ID_CostCategory, Realization\nFROM [dbo].[CostCategory]\n
    \n

    Answer to your comment below:

    \n

    Create Another UDF

    \n
    CREATE FUNCTION [dbo].[CheckValue](@Id INT, @value Money) \nRETURNS INT\nAS \nBEGIN\n  DECLARE @flg INT\n  SELECT @flg = CASE WHEN [Plan] >= @value THEN 1 ELSE 0 END\n  FROM [dbo].[CostCategory]\n  WHERE [ID_CostCategory] = @ID\n\n  return @flg;\nEND\n
    \n

    Now add Constraint on Cost Table:

    \n
    ALTER TABLE [dbo].[Cost]\n  ADD CONSTRAINT CHK_VAL_PLAN_COSTCATG\n    CHECK(dbo.CheckValue(ID_CostCategory, Value) = 1)\n
    \n soup wrap:

    Try this

    --create table without realization column
    CREATE TABLE [dbo].[CostCategory](
    [ID_CostCategory] [int] NOT NULL,
    [Name] [varchar](150) NOT NULL,
    [Plan] [money] NOT NULL
    )
    GO
    
    CREATE TABLE [dbo].[Cost](
    [ID_Cost] [int] NOT NULL,
    [Name] [varchar](50) NULL,
    [ID_CostCategory] [int] NULL,
    [ID_Department] [int] NULL,
    [ID_Project] [int] NULL,
    [Value] [money] NULL,
    
    )
    GO
    

    Create a UDF to calculate sum of the cost column:

    CREATE FUNCTION [dbo].[CalculateRealization](@Id INT) 
    RETURNS money
    AS 
    BEGIN
      DECLARE @cost money
    
      SELECT @cost = SUM(Value)
      FROM [dbo].[Cost]
      WHERE [ID_CostCategory] = @ID
    
      return @cost
    END
    

    Now Alter your CostCategory table to add computed column:

    ALTER TABLE [dbo].[CostCategory]
       ADD [Realization] AS dbo.CalculateRealization(ID_CostCategory);
    

    Now you can select Realization from CostCategory:

    SELECT ID_CostCategory, Realization
    FROM [dbo].[CostCategory]
    

    Answer to your comment below:

    Create Another UDF

    CREATE FUNCTION [dbo].[CheckValue](@Id INT, @value Money) 
    RETURNS INT
    AS 
    BEGIN
      DECLARE @flg INT
      SELECT @flg = CASE WHEN [Plan] >= @value THEN 1 ELSE 0 END
      FROM [dbo].[CostCategory]
      WHERE [ID_CostCategory] = @ID
    
      return @flg;
    END
    

    Now add Constraint on Cost Table:

    ALTER TABLE [dbo].[Cost]
      ADD CONSTRAINT CHK_VAL_PLAN_COSTCATG
        CHECK(dbo.CheckValue(ID_CostCategory, Value) = 1)
    
    qid & accept id: (18757944, 18758087) query: How change non nullable column to nullable column soup:

    If you just want to "fake" the value of a column in a result set, try

    \n
    select id, name, NULL as [date] from samp\n
    \n

    If you want to change the underlying data, do

    \n
    UPDATE samp set [date] = NULL\n
    \n soup wrap:

    If you just want to "fake" the value of a column in a result set, try

    select id, name, NULL as [date] from samp
    

    If you want to change the underlying data, do

    UPDATE samp set [date] = NULL
    
    qid & accept id: (18764988, 18765166) query: SQL Query Comparing Date soup:

    If you are using MS SQL Server try this code:

    \n
    SELECT tb.date_added\n  FROM MyTable tb\n WHERE tb.date_added > DATEADD(week, -2, GETDATE())\n
    \n

    For MySQL try:

    \n
    SELECT tb.date_added\n  FROM MyTable tb\n WHERE DATE_ADD(tb.date_added, INTERVAL 2 WEEK) >= NOW();\n
    \n soup wrap:

    If you are using MS SQL Server, try this code:

    SELECT tb.date_added
      FROM MyTable tb
     WHERE tb.date_added > DATEADD(week, -2, GETDATE())
    

    For MySQL try:

    SELECT tb.date_added
      FROM MyTable tb
     WHERE DATE_ADD(tb.date_added, INTERVAL 2 WEEK) >= NOW();
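
    An equivalent MySQL form leaves the column bare on the left-hand side (so an index on date_added can still be used) by moving the interval to the other side of the comparison:

    ```sql
    SELECT tb.date_added
      FROM MyTable tb
     WHERE tb.date_added >= NOW() - INTERVAL 2 WEEK;
    ```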
    
    qid & accept id: (18777437, 18777827) query: SQL: Linking Multiple Rows in Table Based on Data Chain in Select soup:
soup wrap:
    SELECT * FROM LinkedTable lt
    WHERE lt.link_sequence IN 
       ( SELECT link_sequence FROM LinkedTable WHERE code = 3245 AND link_sequence IS NOT NULL ) 
    ORDER BY lt.ID;
    

    See my SQL Fiddle DEMO.

    SECOND ATTEMPT:

    SELECT DISTINCT * 
    FROM LinkedTable
    START WITH code = 3245
    CONNECT BY NOCYCLE
               PRIOR code = code  AND PRIOR link_sequence+1 = link_sequence OR
               PRIOR code <> code AND PRIOR link_sequence =   link_sequence
    ORDER BY link_sequence, code
    ;
    

    Updated SQL Fiddle with this code. Please try to break it.

    Based on your data (starting with 3245) it gives the following chain:

    ID  CODE    LINK_SEQUENCE   NAME
    2   3245    1              Potato
    1   3267    1              Potato
    3   3245    2              Potato
    4   3975    2              Potato
    5   3975    3              Potato
    6   5478    3              Potato
    
    qid & accept id: (18778492, 18780465) query: MS Access Alter Statement: change column data type to DATETIME soup:

soup wrap:

    Try running these:

    ALTER TABLE table1 ADD NewDate DATE
    

    Then run

    UPDATE table1
    SET NewDate = RecordTime
    WHERE RIGHT(RecordTime,4) <> '- ::'
    

    You can then delete the RecordTime and rename NewDate.

    I prefer adding a new column in case there are any issues with the UPDATE; you can then compare the 'cleaned' column with the initial data before proceeding.

    qid & accept id: (18799810, 18800637) query: function with multiple where soup:

soup wrap:

    First, I would use an inline UDF instead of a scalar function for performance reasons.

    Second, there are two options:

    1) A function that shows total for every department

    CREATE FUNCTION [dbo].[Table2](@pID_CostCategory INT) 
    RETURNS TABLE
    AS 
    RETURN
        SELECT  [ID_Department], SUM(Value) AS koszt
        FROM    [dbo].[Cost]
        WHERE   [ID_CostCategory] = @pID_CostCategory
        GROUP BY [ID_Department];
    GO  
    

    or

    2) A function which has two parameters, the second parameter being optional

    CREATE FUNCTION [dbo].[Table2](@pID_CostCategory INT, @pID_Department INT=NULL) 
    RETURNS TABLE
    AS 
    RETURN
        SELECT  SUM(Value) AS koszt
        FROM    [dbo].[Cost]
        WHERE   [ID_CostCategory] = @pID_CostCategory
        AND     ([ID_Department] = @pID_Department OR @pID_Department IS NULL)
    GO
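
    For reference, a table-valued function is queried like a table; with the second variant the optional parameter must still be supplied, using the DEFAULT keyword (the category and department ids here are made up):

    ```sql
    -- All departments for category 1
    SELECT koszt FROM [dbo].[Table2](1, DEFAULT);

    -- Only department 5 for category 1
    SELECT koszt FROM [dbo].[Table2](1, 5);
    ```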
    
    qid & accept id: (18852505, 18853189) query: Join distant SQL tables without pulling data in between soup:

soup wrap:

    Use DISTINCT to count the distinct Box.id in your query -

    SELECT 
        Box.expected_delivery_date, count(DISTINCT Box.id) num_boxes
    FROM
        Box
            JOIN
        Subscription ON Box.subscription_id = Subscription.id
            JOIN
        BoxContent ON Subscription.id = BoxContent.subscription_id
            JOIN
        Schedule ON Schedule.id = BoxContent.schedule_id
    WHERE
        Box.state = 3 AND Box.status = 2
    GROUP BY Box.expected_delivery_date;
    

    This should return

    2010-10-01 - 2
    2010-10-07 - 4

    Similarly, when you JOIN box with subscription, content, schedule tables you will get many duplicates. You need to analyze the data and see how you need to GROUP BY.

    Use this query to see the actual data used by the query before grouping and decide on which columns to group by. Mostly, it will be the columns where you see duplicate data in multiple rows.

    SELECT 
        Box.expected_delivery_date, Box.id BoxID, Schedule.id SchID
    FROM
        Box
            JOIN
        Subscription ON Box.subscription_id = Subscription.id
            JOIN
        BoxContent ON Subscription.id = BoxContent.subscription_id
            JOIN
        Schedule ON Schedule.id = BoxContent.schedule_id
    WHERE
        Box.state = 3 AND Box.status = 2
    

    You may even try SELECT Box.*, Schedule.* in the above query to come up with a final grouping.

    If you need a more specific answer, you will have to provide dummy data for all those tables and the result you are looking for.

    qid & accept id: (18858779, 18859176) query: T-SQL "Dynamic" Join soup:

soup wrap:

    This SQL will compute the permutations without repetitions:

    WITH recurse(Result, Depth) AS
    (
        SELECT CAST(Value AS VarChar(100)), 1
        FROM MyTable
    
        UNION ALL
    
        SELECT CAST(r.Result + '+' + a.Value AS VarChar(100)), r.Depth + 1
        FROM MyTable a
        INNER JOIN recurse r
        ON CHARINDEX(a.Value, r.Result) = 0
    )
    
    SELECT Result
    FROM recurse
    WHERE Depth = (SELECT COUNT(*) FROM MyTable)
    ORDER BY Result
    

    If MyTable contains 9 rows, it will take some time to compute, but it will return 362,880 rows.

    Update with explanation:

    The WITH statement is used to define a recursive common table expression. In effect, the WITH statement is looping multiple times performing a UNION until the recursion is finished.

    The first part of SQL sets the starting records. Assuming 3 rows named 'A', 'B', and 'C' in MyTable, this will generate these rows:

        Result     Depth
        ------     -----
        A          1
        B          1
        C          1
    

    Then the next block of SQL performs the first level of recursion:

        SELECT CAST(r.Result + '+' + a.Value AS VarChar(100)), r.Depth + 1
        FROM MyTable a
        INNER JOIN recurse r
        ON CHARINDEX(a.Value, r.Result) = 0
    

    This takes all of the records generated so far (which will be in the recurse table) and joins them to all of the records in MyTable again. The ON clause filters the list of records in MyTable to only return the ones that do not exist already in this row's permutation. This would result in these rows:

        Result     Depth
        ------     -----
        A          1
        B          1
        C          1
        A+B        2
        A+C        2
        B+A        2
        B+C        2
        C+A        2
        C+B        2
    

    Then the recursion loops again giving these rows:

        Result     Depth
        ------     -----
        A          1
        B          1
        C          1
        A+B        2
        A+C        2
        B+A        2
        B+C        2
        C+A        2
        C+B        2
        A+B+C      3
        A+C+B      3
        B+A+C      3
        B+C+A      3
        C+A+B      3
        C+B+A      3
    

    At this point, the recursion stops because the UNION does not create any more rows because the CHARINDEX will always be 0.

    The last SQL filters all of the resulting rows where the computed Depth column matches the # of records in MyTable. This throws out all of the rows except for the ones generated by the last depth of recursion. So the final result will be these rows:

        Result
        ------
        A+B+C
        A+C+B
        B+A+C
        B+C+A
        C+A+B
        C+B+A
    
    qid & accept id: (18865590, 18865714) query: Applying multiple condition on a column soup:

soup wrap:

    Try the following to get your results in one row:

    SELECT
    (SELECT COUNT(*)
    FROM Table
    WHERE task = 'search' or task = 'Basic' or task = 'natural search') AS CountSearch,
    (SELECT COUNT(*)
    FROM Table
    WHERE task = 'Query1' or task = 'Query2' or task = 'Query3') AS CountQuery,
    (SELECT COUNT(*)
    FROM Table
    WHERE task = 'sample1' or task = 'sample2') AS CountSample,
    (SELECT COUNT(*)
    FROM Table
    WHERE task = 'test1' or task = 'test2' or task = 'test3') AS CountTest
    

    And the following for your results in several rows:

    SELECT 'CountSearch', COUNT(*)
    FROM Table
    WHERE task = 'search' or task = 'Basic' or task = 'natural search'
    UNION ALL
    SELECT 'CountQuery', COUNT(*)
    FROM Table
    WHERE task = 'Query1' or task = 'Query2' or task = 'Query3'
    UNION ALL
    SELECT 'CountSample', COUNT(*)
    FROM Table
    WHERE task = 'sample1' or task = 'sample2'
    UNION ALL
    SELECT 'CountTest', COUNT(*)
    FROM Table
    WHERE task = 'test1' or task = 'test2' or task = 'test3'
    

    I renamed your columns, because you can't use brackets as a column name in an SQL statement.
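
    As a side note, each run of repeated OR comparisons can be collapsed with IN, which reads the same way; sketched for the first branch only:

    ```sql
    SELECT 'CountSearch', COUNT(*)
    FROM Table
    WHERE task IN ('search', 'Basic', 'natural search')
    ```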

    qid & accept id: (18873251, 18877065) query: Is it possible to reference columns from one common table expression in another, without using joins? soup:

soup wrap:

    Here's a vague outline of how I'd approach this. It makes a lot of assumptions, is missing key components, has not been debugged in any way, and is completely dependent on those queries you have no control over being "good" for hard-to-ascertain values of good.

    Assumption: a set of queries that looks something like this:

    Level1Q:  select * from users where name=:param_user
    Level2Q:  select * from projects where id=:param_id
    Level3Q:  select * from details where id=:param_id
    Level4Q:  
    

    So, for a "level 3" query, you'd want to generate the following:

    ;WITH
       Level1Q as (select * from users where name=:param_user)
      ,Level2Q as (select * from projects where id=:param_id)
      ,Level3Q as (select * from details where id=:param_id)
     select * from Level3Q
    

    This, or something much like it, should produce that query:

    DECLARE
      @Command   nvarchar(max)
     ,@Query     nvarchar(max)
     ,@Loop      int
     ,@MaxDepth  int
     ,@CRLF      char(2) = char(13) + char(10)  --  Makes the dynamic code more legible
    
    SET @Command = 'WITH'
    
    
    --  Set @MaxDepth to the level you want to query at
    SET @MaxDepth = 3
    SET @Loop = 0
    
    WHILE @Loop < @MaxDepth
     BEGIN
        SET @Loop = @Loop + 1
    
        --  Get the query for this level
        SET @Query = 
    
        SET @Command = replace(@Command + @CRLF
                               + case @Loop when 1 then '  ' else ' ,' end
                               + 'Level<<@Loop>>Q as (' + @Query + ')'
         ,':param_user', >Q.id')  --  This assumes the link to the prior query is always by a column named "id"
         ,'<<@Loop>>', @Loop)  --  Done last, as the prior replace added another <<@Loop>>
    
     END
    
    --  Add the final pull
    SET @Command = @Command + @CRLF + replace(' select * from Level<<@Loop>>Q', '<<@Loop>>', @Loop - 1)
    
    --  The most important command, because debugging this mess will be a pain
    PRINT @Command
    
    --EXECUTE sp_executeSQL @Command 
    
    qid & accept id: (18885583, 18887078) query: CTE to build hierarchy from source table soup:

soup wrap:

    You can use OUTPUT in combination with MERGE to get a mapping from old IDs to new IDs.

    The essential part:

    --this is where you got stuck
    Declare @MapIds Table (aOldID int,aNewID int)
    
    ;MERGE INTO @NewSeed AS TargetTable
    Using @DefaultSeed as Source on 1=0
    WHEN NOT MATCHED then
     Insert (Code,RequiredID)
     Values
     (Source.Code,Source.RequiredID)
    OUTPUT Source.ID ,inserted.ID into @MapIds;
    
    
    Update @NewSeed Set RequiredID=aNewID
    from @MapIds
    Where RequiredID=aOldID
    

    and the whole example:

    DECLARE @Table TABLE (ID INT, Code NVARCHAR(50), RequiredID INT);
    
    INSERT INTO @Table (ID, Code, RequiredID)   VALUES
        (1, 'Physics', NULL),
        (2, 'Advanced Physics', 1),
        (3, 'Nuke', 2),
        (4, 'Health', NULL);    
    
    DECLARE @DefaultSeed TABLE (ID INT, Code NVARCHAR(50), RequiredID INT);
    
    WITH hierarchy 
    AS (
        --anchor
        SELECT  t.ID , t.Code , t.RequiredID
        FROM @Table AS t
        WHERE t.RequiredID IS NULL
    
        UNION ALL   
    
        --recursive
        SELECT  t.ID 
              , t.Code 
              , h.ID        
        FROM hierarchy AS h
            JOIN @Table AS t 
                ON t.RequiredID = h.ID
        )
    
    INSERT INTO @DefaultSeed (ID, Code, RequiredID)
    SELECT  ID 
            , Code 
            , RequiredID
    FROM hierarchy
    OPTION (MAXRECURSION 10)
    
    
    DECLARE @NewSeed TABLE (ID INT IDENTITY(10, 1), Code NVARCHAR(50), RequiredID INT)
    
    Declare @MapIds Table (aOldID int,aNewID int)
    
    ;MERGE INTO @NewSeed AS TargetTable
    Using @DefaultSeed as Source on 1=0
    WHEN NOT MATCHED then
     Insert (Code,RequiredID)
     Values
     (Source.Code,Source.RequiredID)
    OUTPUT Source.ID ,inserted.ID into @MapIds;
    
    
    Update @NewSeed Set RequiredID=aNewID
    from @MapIds
    Where RequiredID=aOldID
    
    
    /*
    --@NewSeed should read like the following...
    [ID]  [Code]           [RequiredID]
    10....Physics..........NULL
    11....Health...........NULL
    12....AdvancedPhysics..10
    13....Nuke.............12
    */
    
    SELECT *
    FROM @NewSeed
    
    qid & accept id: (18904109, 18904185) query: Link one record to multiple records in separate table soup:

soup wrap:

    You have a Many-to-Many relationship. Typically this is implemented by adding a table in between the two data tables:

    Phones -> PhoneCarriers -> Carriers
    

    PhoneCarrier will look something like:

    PhoneCarrierID
    PhoneID (FK)
    CarrierID (FK)
    

    You won't have a foreign key directly from Phone to Carrier in that scenario.
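
    A minimal sketch of that junction table (the key names for Phones and Carriers are assumptions):

    ```sql
    CREATE TABLE PhoneCarriers (
        PhoneCarrierID INT IDENTITY(1,1) PRIMARY KEY,
        PhoneID        INT NOT NULL REFERENCES Phones (PhoneID),
        CarrierID      INT NOT NULL REFERENCES Carriers (CarrierID),
        CONSTRAINT UQ_PhoneCarrier UNIQUE (PhoneID, CarrierID)  -- each pairing at most once
    );
    ```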

    qid & accept id: (18920393, 18926121) query: SQL Server : get next relative day of week. (Next Monday, Tuesday, Wed.....) soup:

soup wrap:

    1) Your solution uses a non-deterministic function: datepart(dw, ...). Because of this, changing the DATEFIRST setting will give different results. For example, you should try:

    SET DATEFIRST 7;
    your solution;
    

    and then

    SET DATEFIRST 1;
    your solution;
    

    2) The following solution is independent of the DATEFIRST/LANGUAGE settings:

    DECLARE @NextDayID INT  = 0 -- 0=Mon, 1=Tue, 2 = Wed, ..., 5=Sat, 6=Sun
    SELECT DATEADD(DAY, (DATEDIFF(DAY, @NextDayID, GETDATE()) / 7) * 7 + 7, @NextDayID) AS NextDay
    

    Result:

    NextDay
    -----------------------
    2013-09-23 00:00:00.000
    

    This solution is based on the following property of the DATETIME type:

    • Day 0 = 19000101 = Mon

    • Day 1 = 19000102 = Tue

    • Day 2 = 19000103 = Wed

    ...

    • Day 5 = 19000106 = Sat

    • Day 6 = 19000107 = Sun

    So, converting INT value 0 to DATETIME gives 19000101.

    If you want to find the next Wednesday then you should start from day 2 (19000103/Wed), compute the days between day 2 and the current day (20130921; 41534 days), divide by 7 (in order to get the number of full weeks; 5933 weeks), multiply by 7 (41531 days; in order to get the number of days in full weeks between the first Wednesday/19000103 and the last Wednesday) and then add 7 days (one week; 41538 days; in order to get the following Wednesday). Add this number (41538 days) to the starting date: 19000103.

    Note: my current date is 20130921.

    Edit #1:

    DECLARE @NextDayID INT;
    SET @NextDayID = 1; -- Next Sunday
    SELECT DATEADD(DAY, (DATEDIFF(DAY, ((@NextDayID + 5) % 7), GETDATE()) / 7) * 7 + 7, ((@NextDayID + 5) % 7)) AS NextDay
    

    Result:

    NextDay
    -----------------------
    2013-09-29 00:00:00.000 
    

    Note: my current date is 20130923.

    qid & accept id: (18922620, 18924755) query: MySQL: SELECT Row Based on Ratio of True to False in Second Table soup:

soup wrap:

    Try this

        select r.mediaid, 
           count(*) as total_rows, 
           sum(rating) as id_sum,
           SUM(rating)/count(*) AS score
        from rating r, media m
        where r.mediaid=m.mediaid
        group by r.mediaid
    

    If you want to report only those records with a score above a threshold such as 0.75, then add the HAVING clause:

     select r.mediaid, 
            count(*) as total_rows, 
            sum(rating) as id_sum,
            SUM(rating)/count(*) AS score
       from rating r, media m
      where r.mediaid=m.mediaid
      group by r.mediaid
      having score > .75  
    

    Here's a demo SQL Fiddle

    After Comment

    One way is to sort by scores desc and then limit to 1 record like this SQL Fiddle#2

        select r.mediaid, 
         count(*) as total_rows, 
         sum(rating) as id_sum,
         SUM(rating)/count(*) AS score
    from rating r, media m
     where r.mediaid=m.mediaid
     group by r.mediaid
    order by score desc limit 1
    
    qid & accept id: (18992088, 18992216) query: Order 2 tables by column names soup:

soup wrap:

    You can use this to help build the query:

    SELECT ',' + name 
    FROM sys.columns
    WHERE object_id IN (OBJECT_ID('Table1'),OBJECT_ID('Table2'))
    ORDER BY name
    

    Update: Dynamic SQL version (still have to plop table names in manually):

    DECLARE @sql VARCHAR(MAX)
           ,@cols VARCHAR(MAX)
    SET @cols = (SELECT STUFF((SELECT ',' + Name
                               FROM (SELECT DISTINCT Name
                                      FROM sys.columns
                                      WHERE object_id IN (OBJECT_ID('Table1'),OBJECT_ID('Table2'))
                                         AND Name <> 'ID'
                                      )sub
                                ORDER BY name
                                FOR XML PATH('')        
                                ), 1, 1, '' ))
    SET @sql = 'SELECT ' +@cols+'
                FROM Table1 a
                JOIN Table2 b
                  ON a.ID = b.ID
               '
    EXEC (@sql)
    
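A sketch of the same build-a-column-list idea in Python against SQLite (tables and data are invented for the demo; PRAGMA table_info stands in for sys.columns, and the table names are still plugged in manually):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Table1 (ID INTEGER, Name TEXT);
CREATE TABLE Table2 (ID INTEGER, Amount INTEGER);
INSERT INTO Table1 VALUES (1, 'a');
INSERT INTO Table2 VALUES (1, 42);
""")

# Collect every column name from both tables except the join key, dedupe,
# and sort -- the same job the STUFF/FOR XML PATH trick does in T-SQL.
cols = sorted({
    row[1]                                  # row[1] is the column name
    for table in ("Table1", "Table2")
    for row in con.execute(f"PRAGMA table_info({table})")
    if row[1] != "ID"
})
sql = f"SELECT {', '.join(cols)} FROM Table1 a JOIN Table2 b ON a.ID = b.ID"
print(sql)
print(con.execute(sql).fetchall())  # [(42, 'a')]
```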
    qid & accept id: (19005246, 19006246) query: tracking customer retension on weekly basis soup:

    soup wrap:

    I see two ways to do it. I would go for an array approach, since it will probably be the fastest (single data step) and is not that complex:

    data RESULT (drop=start_week end_week);
        set YOUR_DATA;
        array week_array{62} week0-week61;
        do week=0 to 61;
            if week between start_week and end_week then week_array[week+1]=1;
            else week_array[week+1]=0;
        end;
    run;
    

    Alternatively, you can prepare a table for the transpose to work by creating one record per week per id:

    data BEFORE_TRANSPOSE (drop=start_week end_week);
        set YOUR_DATA;
        do week=0 to 61;
            if week between start_week and end_week then subscribed=1;
            else subscribed=0;
            output;
        end;
    run;
    
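The array logic of the first data step can be sketched outside SAS as well; here is a hypothetical Python equivalent that expands one start_week/end_week pair into the 62 flags week0-week61:

```python
# Expand one subscription interval into weekly 0/1 flags, mirroring the
# week_array loop of the data step (the bounds are inclusive, as with
# SAS's BETWEEN).
def week_flags(start_week, end_week, n_weeks=62):
    return [1 if start_week <= week <= end_week else 0
            for week in range(n_weeks)]

flags = week_flags(3, 5)   # hypothetical id with start_week=3, end_week=5
print(flags[:8])  # [0, 0, 0, 1, 1, 1, 0, 0]
```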
    qid & accept id: (19006430, 19007015) query: Converting a pivot table to a flat table in SQL soup:

    soup wrap:

    In order to get the result, you will need to UNPIVOT the data. When you unpivot, you convert multiple columns into multiple rows; in doing so, the datatypes of the data must be the same.

    I would use CROSS APPLY to unpivot the columns in pairs:

    select t.employee_id,
      t.employee_name,
      c.data,
      c.old,
      c.new
    from yourtable t
    cross apply
    (
      values 
      ('Address', Address_Old, Address_new),
      ('Income', cast(income_old as varchar(15)), cast(income_new as varchar(15)))
    ) c (data, old, new);
    

    See SQL Fiddle with demo. As you can see this uses a cast on the income columns because I am guessing it is a different datatype from the address. Since the final result will have these values in the same column the data must be of the same type.

    This can also be written using CROSS APPLY with UNION ALL:

    select t.employee_id,
      t.employee_name,
      c.data,
      c.old,
      c.new
    from yourtable t
    cross apply
    (
      select 'Address', Address_Old, Address_new union all
      select 'Income', cast(income_old as varchar(15)), cast(income_new as varchar(15))
    ) c (data, old, new)
    

    See Demo
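CROSS APPLY is SQL Server-specific; the same unpivot-in-pairs result can be sketched portably as one UNION ALL branch per pair (shown here via Python's sqlite3 with invented data, and with the income columns cast to TEXT so both branches share a datatype):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE yourtable (
  employee_id INTEGER, employee_name TEXT,
  Address_Old TEXT, Address_New TEXT,
  income_old INTEGER, income_new INTEGER
);
INSERT INTO yourtable VALUES (1, 'Ann', 'Old St', 'New St', 100, 120);
""")

# One SELECT per unpivoted pair; CAST keeps the value columns single-typed.
rows = con.execute("""
    SELECT employee_id, employee_name, 'Address' AS data,
           Address_Old AS oldval, Address_New AS newval
    FROM yourtable
    UNION ALL
    SELECT employee_id, employee_name, 'Income',
           CAST(income_old AS TEXT), CAST(income_new AS TEXT)
    FROM yourtable
    ORDER BY employee_id, data
""").fetchall()
print(rows)
# [(1, 'Ann', 'Address', 'Old St', 'New St'), (1, 'Ann', 'Income', '100', '120')]
```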

    qid & accept id: (19041847, 19042537) query: Best way to display number of overspent projects soup:

    soup wrap:

    You can do it like this

    CREATE VIEW OverBudgetProjects AS
      SELECT p.department, p.projectid
        FROM project p LEFT JOIN assignment a
          ON p.projectid = a.projectid
       GROUP BY p.department, p.projectid
      HAVING MAX(p.maxhours) < SUM(a.hoursworked);
    
    CREATE VIEW Projects AS
      SELECT DepartmentName, 
             COUNT(DISTINCT p.projectid) NumberOfProjects,
             COUNT(DISTINCT o.Projectid) NumberOfOverBudgetProjects,
             OfficeNumber,
             Phone
        FROM department d JOIN project p
          ON d.DepartmentName = p.Department LEFT JOIN OverBudgetProjects o
          ON d.DepartmentName = o.Department
       GROUP BY p.Department;
    

    Sample output from issuing

    SELECT * FROM Projects
    

    is

    | DEPARTMENTNAME | NUMBEROFPROJECTS | NUMBEROFOVERBUDGETPROJECTS | OFFICENUMBER |        PHONE |
    |----------------|------------------|----------------------------|--------------|--------------|
    |     Accounting |                1 |                          0 |   BLDG01-100 | 360-285-8300 |
    |        Finance |                2 |                          0 |   BLDG01-140 | 360-285-8400 |
    |      Marketing |                2 |                          2 |   BLDG02-200 | 360-287-8700 |
    

    Here is SQLFiddle demo

    qid & accept id: (19053225, 19055971) query: Count the number of occurrences grouped by some rows soup:

    soup wrap:

    Since you seem to want every row in the result individually, you cannot aggregate. Use a window function instead to get the count per day. The well-known aggregate function count() can also serve as a window aggregate function:

    SELECT current_date - ped.data_envio::date AS days_out_of_stock
          ,count(*) OVER (PARTITION BY ped.data_envio::date)
                                            AS count_per_days_out_of_stock
          ,ped.data_envio::date AS date
          ,p.id                 AS product_id
          ,opl.id               AS storage_id
    FROM   sub_produtos_pedidos spp
    LEFT   JOIN cad_produtos    p   ON p.cod_ean = spp.ean_produto
    LEFT   JOIN sub_pedidos     sp  ON sp.id     = spp.id_pedido
    LEFT   JOIN op_logisticos   opl ON opl.id    = sp.id_op_logistico
    LEFT   JOIN pedidos         ped ON ped.id    = sp.id_pedido
    WHERE  spp.motivo = '201'                   -- code for 'not in inventory'
    ORDER  BY ped.data_envio::date, p.id, opl.id

    Sort order: Products having been out of stock for the longest time first.
    Note, you can just subtract dates to get an integer in Postgres.

    If you want a running count in the sense of "n rows have been out of stock for this number of days or more", use:

    count(*) OVER (ORDER BY ped.data_envio::date) -- ascending order!
                                            AS running_count_per_days_out_of_stock
    

    You get the same count for the same day, peers are lumped together.
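A minimal runnable sketch of both window counts (via Python's sqlite3; window functions need SQLite 3.25 or later, and the table and data here are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE stockout (product_id INTEGER, data_envio TEXT);
INSERT INTO stockout VALUES (1, '2024-01-01'), (2, '2024-01-01'), (3, '2024-01-02');
""")

# PARTITION BY gives each row the count of its own day; the ORDER BY form
# gives the running count, with same-day peers lumped together.
rows = con.execute("""
    SELECT product_id, data_envio,
           COUNT(*) OVER (PARTITION BY data_envio) AS count_per_day,
           COUNT(*) OVER (ORDER BY data_envio)     AS running_count
    FROM stockout
    ORDER BY data_envio, product_id
""").fetchall()
print(rows)
# [(1, '2024-01-01', 2, 2), (2, '2024-01-01', 2, 2), (3, '2024-01-02', 1, 3)]
```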

    qid & accept id: (19068044, 19068152) query: Select from list of values received from a subquery, possibly null soup:

    soup wrap:

    Use EXISTS instead of IN: EXISTS is clearer (IMHO) and in most cases it is faster, too (IN (...) needs to remove/suppress duplicates and NULLs, and thus sort the set).

    In this particular case: the aggregating subquery is only needed to find out that the group count() > 1. The query optimiser may not realise this, and calculate the complete group counts (over the complete set of rows) before comparing them to 1.

    SELECT tt.id
    FROM thetable tt
    WHERE EXISTS (
        SELECT * FROM thetable ex
        WHERE ex.column1 = tt.column1 AND ex.id <> tt.id
    );
    

    WRT the suppression of NULLs: the WHERE ex.column1 = tt.column1 clause will never be true if either ex.column1 or tt.column1 (or both) happens to be NULL.


    UPDATE. It appears that the OP also wants the tuples with column1 IS NULL, if there are more of them. The simple solution is to use a sentinel value (a value that is not natively present in column1) and use it as a surrogate (in the fragment below, -1 is used as the surrogate value):

    SELECT tt.id
    FROM thetable tt
    WHERE EXISTS (
        SELECT * FROM thetable ex
        WHERE COALESCE(ex.column1, -1) = COALESCE(tt.column1, -1)
        AND ex.id <> tt.id
    );
    

    The other (obvious) way would be to explicitly check for NULLs, but this will require an OR clause and a bunch of parentheses, like:

    SELECT tt.id
    FROM thetable tt
    WHERE EXISTS (
        SELECT * FROM thetable ex
        WHERE (ex.column1 = tt.column1 
              OR (ex.column1 IS NULL AND tt.column1 IS NULL)
              )
        AND ex.id <> tt.id
    );
    
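A small sqlite3 sketch of the sentinel variant (invented data: ids 1 and 2 share a value, ids 3 and 4 are both NULL, id 5 is unique):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE thetable (id INTEGER PRIMARY KEY, column1 INTEGER);
INSERT INTO thetable VALUES (1, 10), (2, 10), (3, NULL), (4, NULL), (5, 20);
""")

# COALESCE maps NULL to the sentinel -1 on both sides, so the NULL
# "duplicates" match too; id 5 has no partner and is excluded.
ids = [r[0] for r in con.execute("""
    SELECT tt.id FROM thetable tt
    WHERE EXISTS (
        SELECT * FROM thetable ex
        WHERE COALESCE(ex.column1, -1) = COALESCE(tt.column1, -1)
          AND ex.id <> tt.id
    )
    ORDER BY tt.id
""")]
print(ids)  # [1, 2, 3, 4]
```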
    qid & accept id: (19073500, 19073575) query: SQL split comma separated row soup:

    soup wrap:

    You can do it with pure SQL like this

    SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(t.values, ',', n.n), ',', -1) value
      FROM table1 t CROSS JOIN 
    (
       SELECT a.N + b.N * 10 + 1 n
         FROM 
        (SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) a
       ,(SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) b
        ORDER BY n
    ) n
     WHERE n.n <= 1 + (LENGTH(t.values) - LENGTH(REPLACE(t.values, ',', '')))
     ORDER BY value
    

    Note: The trick is to leverage a tally (numbers) table and the MySQL function SUBSTRING_INDEX(), which is very handy in this case. If you do a lot of such queries (splitting), then you might consider populating and using a persisted tally table instead of generating it on the fly with a subquery like in this example. The subquery in this example generates a sequence of numbers from 1 to 100, effectively allowing you to split up to 100 delimited values per row in the source table. If you need more or less, you can easily adjust it.

    Output:

    |          VALUE |
    |----------------|
    |     somethingA |
    |     somethingB |
    |     somethingC |
    | somethingElseA |
    | somethingElseB |
    

    Here is SQLFiddle demo


    This is how the query might look with a persisted tally table

    SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(t.values, ',', n.n), ',', -1) value
      FROM table1 t CROSS JOIN tally n
     WHERE n.n <= 1 + (LENGTH(t.values) - LENGTH(REPLACE(t.values, ',', '')))
     ORDER BY value
    

    Here is SQLFiddle demo
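For comparison, the nested SUBSTRING_INDEX(SUBSTRING_INDEX(...)) trick can be mirrored in plain Python; substring_index below is a hypothetical re-implementation of the MySQL function, and the loop plays the role of the tally table:

```python
# A hypothetical Python re-implementation of MySQL's SUBSTRING_INDEX:
# a positive count keeps everything left of the count-th delimiter,
# a negative count keeps everything right of it (counting from the end).
def substring_index(s, delim, count):
    parts = s.split(delim)
    return delim.join(parts[:count]) if count > 0 else delim.join(parts[count:])

row = "somethingA,somethingB,somethingC"
n_items = row.count(",") + 1           # what the LENGTH/REPLACE trick computes
values = [substring_index(substring_index(row, ",", n), ",", -1)
          for n in range(1, n_items + 1)]
print(values)  # ['somethingA', 'somethingB', 'somethingC']
```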

    qid & accept id: (19101688, 19103866) query: SQL: 2 same vowels regex soup:

    soup wrap:

    This isn't pretty or short but it is simple.

    SELECT word
    FROM tabl
    WHERE
      -- assuming case sensitive based on your example
      (word LIKE '%[Aa]%[Aa]%' AND word NOT LIKE '%[Aa]%[Aa]%[Aa]%')
      OR
      (word LIKE '%[Ee]%[Ee]%' AND word NOT LIKE '%[Ee]%[Ee]%[Ee]%')
      OR
      (word LIKE '%[Ii]%[Ii]%' AND word NOT LIKE '%[Ii]%[Ii]%[Ii]%')
      OR
      (word LIKE '%[Oo]%[Oo]%' AND word NOT LIKE '%[Oo]%[Oo]%[Oo]%')
      OR
      (word LIKE '%[Uu]%[Uu]%' AND word NOT LIKE '%[Uu]%[Uu]%[Uu]%')
    

    It occurs to me that you didn't specify what to do for a place that has two of one vowel and three of another. Does that qualify? If not (say Alaska StatE PEak Park was bad even though it has exactly 2 E's in it), then you might want this instead:

    SELECT word 
    FROM tabl 
    WHERE
      -- assuming case sensitive based on your example
      ( word LIKE '%[Aa]%[Aa]%'
        OR word LIKE '%[Ee]%[Ee]%'
        OR word LIKE '%[Ii]%[Ii]%'
        OR word LIKE '%[Oo]%[Oo]%'
        OR word LIKE '%[Uu]%[Uu]%' 
      )
      AND word NOT LIKE '%[Aa]%[Aa]%[Aa]%'
      AND word NOT LIKE '%[Ee]%[Ee]%[Ee]%'
      AND word NOT LIKE '%[Ii]%[Ii]%[Ii]%'
      AND word NOT LIKE '%[Oo]%[Oo]%[Oo]%'
      AND word NOT LIKE '%[Uu]%[Uu]%[Uu]%'
    
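The second query's rule ("at least two of some vowel, but never three of any") is easy to sanity-check outside SQL; a hypothetical Python predicate:

```python
# Count each vowel case-insensitively; a word qualifies when some vowel
# occurs at least twice and no vowel occurs three times or more.
def exactly_two_of_some_vowel(word):
    counts = [word.lower().count(v) for v in "aeiou"]
    return any(c >= 2 for c in counts) and all(c < 3 for c in counts)

print(exactly_two_of_some_vowel("Oregon"))   # True: two o's, no vowel 3+ times
print(exactly_two_of_some_vowel("Alaska"))   # False: three a's
```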
    qid & accept id: (19136921, 19144070) query: How to count all posts belonging to multiple tags in NHibernate? soup:

    soup wrap:

    I found a way to get this result without a subquery, and it works with NHibernate LINQ. It was actually not that easy because of the limited subset of LINQ expressions supported by NHibernate... but anyway:

    query:

    var searchTags = new[] { "C#", "C++" };
    var result = session.Query<Post>()
            .Select(p => new { 
                Id = p.Id, 
                Count = p.Tags.Where(t => searchTags.Contains(t.Title)).Count() 
            })
            .Where(s => s.Count >= 2)
            .Count();
    

    It produces the following SQL statement:

    select cast(count(*) as INT) as col_0_0_ 
    from Posts post0_ 
    where (
        select cast(count(*) as INT)
        from PostsToTags tags1_, Tags tag2_ 
        where post0_.Id=tags1_.Post_id 
        and tags1_.Tag_id=tag2_.Id 
        and (tag2_.Title='C#' or tag2_.Title='C++'))>=2
    

    You should be able to build your user restriction into this, I hope.

    The following is my test setup and random data which got generated

    public class Post
    {
        public Post()
        {
            Tags = new List<Tag>();
        }
        public virtual void AddTag(Tag tag)
        {
            this.Tags.Add(tag);
            tag.Posts.Add(this);
        }
        public virtual string Title { get; set; }
        public virtual string Content { get; set; }
        public virtual ICollection<Tag> Tags { get; set; }
        public virtual int Id { get; set; }
    }
    
    public class PostMap : ClassMap<Post>
    {
        public PostMap()
        {
            Table("Posts");
    
            Id(p => p.Id).GeneratedBy.Native();
    
            Map(p => p.Content);
    
            Map(p => p.Title);
    
            HasManyToMany(map => map.Tags).Cascade.All();
        }
    }
    
    public class Tag
    {
        public Tag()
        {
            Posts = new List<Post>();
        }
        public virtual string Title { get; set; }
        public virtual string Description { get; set; }
        public virtual ICollection<Post> Posts { get; set; }
        public virtual int Id { get; set; }
    }
    
    public class TagMap : ClassMap<Tag>
    {
        public TagMap()
        {
            Table("Tags");
            Id(p => p.Id).GeneratedBy.Native();
    
            Map(p => p.Description);
            Map(p => p.Title);
            HasManyToMany(map => map.Posts).LazyLoad().Inverse();
        }
    }
    

    test run:

    var sessionFactory = Fluently.Configure()
        .Database(FluentNHibernate.Cfg.Db.MsSqlConfiguration.MsSql2012
            .ConnectionString(@"Server=.\SQLExpress;Database=TestDB;Trusted_Connection=True;")
            .ShowSql)
        .Mappings(m => m.FluentMappings
        .AddFromAssemblyOf<PostMap>())
        .ExposeConfiguration(cfg => new SchemaUpdate(cfg).Execute(false, true))
        .BuildSessionFactory();
    
    using (var session = sessionFactory.OpenSession())
    {
        var t1 = new Tag() { Title = "C#", Description = "C#" };
        session.Save(t1);
        var t2 = new Tag() { Title = "C++", Description = "C/C++" };
        session.Save(t2);
        var t3 = new Tag() { Title = ".Net", Description = "Net" };
        session.Save(t3);
        var t4 = new Tag() { Title = "Java", Description = "Java" };
        session.Save(t4);
        var t5 = new Tag() { Title = "lol", Description = "lol" };
        session.Save(t5);
        var t6 = new Tag() { Title = "rofl", Description = "rofl" };
        session.Save(t6);
        var tags = session.Query<Tag>().ToList();
        var r = new Random();
    
        for (int i = 0; i < 1000; i++)
        {
            var post = new Post()
            {
                Title = "Title" + i,
                Content = "Something awesome" + i,
            };
    
            var manyTags = r.Next(1, 3);
    
            while (post.Tags.Count() < manyTags)
            {
                var index = r.Next(0, 6);
                if (!post.Tags.Contains(tags[index]))
                {
                    post.AddTag(tags[index]);
                }
            }
    
            session.Save(post);
        }
        session.Flush();
    
        /* query test */
        var searchTags = new[] { "C#", "C++" };
        var result = session.Query<Post>()
                .Select(p => new { 
                    Id = p.Id, 
                    Count = p.Tags.Where(t => searchTags.Contains(t.Title)).Count() 
                })
                .Where(s => s.Count >= 2)
                .Count();
    
        var resultOriginal = session.CreateSQLQuery(@"
           SELECT COUNT(*) 
            FROM 
            (
            SELECT count(Posts.Id)P FROM Posts
            LEFT JOIN PostsToTags ON Posts.Id=PostsToTags.Post_id 
            LEFT JOIN Tags ON PostsToTags.Tag_id=Tags.Id 
            WHERE Tags.Title in ('c#', 'C++')
            GROUP BY Posts.Id 
            HAVING COUNT(Posts.Id)>=2
            )t
        ").List()[0];
    
        var isEqual = result == (int)resultOriginal;
    }
    

    As you can see at the end I do test against your original query (without the users) and it is actually the same count.
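The counting logic itself (ignoring the ORM) can be sketched with sqlite3 and invented data: group the tag links per post and keep posts matching at least two of the searched tags.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Posts (Id INTEGER PRIMARY KEY);
CREATE TABLE Tags (Id INTEGER PRIMARY KEY, Title TEXT);
CREATE TABLE PostsToTags (Post_id INTEGER, Tag_id INTEGER);
INSERT INTO Posts VALUES (1), (2), (3);
INSERT INTO Tags VALUES (1, 'C#'), (2, 'C++'), (3, 'Java');
-- post 1 carries both searched tags; posts 2 and 3 carry only one each
INSERT INTO PostsToTags VALUES (1, 1), (1, 2), (2, 1), (3, 1), (3, 3);
""")

(count,) = con.execute("""
    SELECT COUNT(*) FROM (
        SELECT ptt.Post_id
        FROM PostsToTags ptt
        JOIN Tags t ON ptt.Tag_id = t.Id
        WHERE t.Title IN ('C#', 'C++')
        GROUP BY ptt.Post_id
        HAVING COUNT(*) >= 2
    )
""").fetchone()
print(count)  # 1
```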

    qid & accept id: (19155321, 19556418) query: MySQL paging large data based on a specific order soup:

    Firstly, you need to create an index based on the date field. This allows the rows to be retrieved in order without having to sort the entire table every time a request is made.


    Secondly, paging based on index gets slower the deeper you delve into the result set. To illustrate:

    • ORDER BY indexedcolumn LIMIT 0, 200 is very fast because it only has to scan 200 rows of the index.

    • ORDER BY indexedcolumn LIMIT 200, 200 is relatively fast, but requires scanning 400 rows of the index.

    • ORDER BY indexedcolumn LIMIT 660000, 200 is very slow because it requires scanning 660,200 rows of the index.

      Note: even so, this may still be significantly faster than not having an index at all.

    You can fix this in a few different ways.

    \n
      \n
    1. Implement value-based paging, so you're paging based on the value of the last result on the previous page. For example:

      \n

      WHERE indexedcolumn>[lastval] ORDER BY indexedcolumn LIMIT 200 replacing [lastval] with the value of the last result of the current page. The index allows random access to a particular value, and proceeding forward or backwards from that value.

    2. \n
    3. Only allow users to view the first X rows (eg. 1000). This is no good if the value they want is the 2529th value.

    4. \n
    5. Think of some logical way of breaking up your large table, for example by the first letter, the year, etc so users never have to encounter the entire result set of millions of rows, instead they need to drill down into a specific subset first, which will be a smaller set and quicker to sort.

    6. \n
    \n

    If you're combining a WHERE and an ORDER BY you'll need to reflect this in the design of your index to enable MySQL to continue to benefit from the index for sorting. For example if your query is:

    \n
    SELECT * FROM mytable WHERE year='2012' ORDER BY date LIMIT 0, 200\n
    \n

    Then your index will need to be on two columns (year, date) in that order.

    \n

    If your query is:

    \n
    SELECT * FROM mytable WHERE firstletter='P' ORDER BY date LIMIT 0, 200\n
    \n

    Then your index will need to be on the two columns (firstletter, date) in that order.

    \n

    The idea is that an index on multiple columns allows sorting by any column as long as you specified previous columns to be constants (single values) in a condition. So an index on A, B, C, D and E allows sorting by C if you specify A and B to be constants in a WHERE condition. A and B cannot be ranges.

    \n soup wrap:

    Firstly you need to create an index based on the date field. This allows the rows to be retrieved in order without having to sort the entire table every time a request is made.

    Secondly, paging based on index gets slower the deeper you delve into the result set. To illustrate:

    • ORDER BY indexedcolumn LIMIT 0, 200 is very fast because it only has to scan 200 rows of the index.

    • ORDER BY indexedcolumn LIMIT 200, 200 is relatively fast, but requires scanning 400 rows of the index.

    • ORDER BY indexedcolumn LIMIT 660000, 200 is very slow because it requires scanning 660,200 rows of the index.

      Note: even so, this may still be significantly faster than not having an index at all.

    You can fix this in a few different ways.

    1. Implement value-based paging, so you're paging based on the value of the last result on the previous page. For example:

      WHERE indexedcolumn>[lastval] ORDER BY indexedcolumn LIMIT 200 replacing [lastval] with the value of the last result of the current page. The index allows random access to a particular value, and proceeding forward or backwards from that value.

    2. Only allow users to view the first X rows (eg. 1000). This is no good if the value they want is the 2529th value.

    3. Think of some logical way of breaking up your large table, for example by the first letter, the year, etc., so users never have to encounter the entire result set of millions of rows; instead, they drill down into a specific subset first, which will be a smaller set and quicker to sort.
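Option 1 (value-based, or "keyset", paging) can be sketched with SQLite through Python's sqlite3 module. The table and column names here are invented for the demo, and it assumes the paging column is unique:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, val INTEGER)")
conn.executemany("INSERT INTO mytable (val) VALUES (?)",
                 [(i,) for i in range(1000)])
conn.execute("CREATE INDEX idx_val ON mytable (val)")  # supports ordered seeks

PAGE = 200

def first_page():
    return conn.execute(
        "SELECT id, val FROM mytable ORDER BY val LIMIT ?", (PAGE,)).fetchall()

def next_page(last_val):
    # Seek past the last value seen instead of using OFFSET,
    # so the cost does not grow with page depth.
    return conn.execute(
        "SELECT id, val FROM mytable WHERE val > ? ORDER BY val LIMIT ?",
        (last_val, PAGE)).fetchall()

page1 = first_page()
page2 = next_page(page1[-1][1])  # pass the last value of the current page
```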

    If you're combining a WHERE and an ORDER BY you'll need to reflect this in the design of your index to enable MySQL to continue to benefit from the index for sorting. For example if your query is:

    SELECT * FROM mytable WHERE year='2012' ORDER BY date LIMIT 0, 200
    

    Then your index will need to be on two columns (year, date) in that order.

    If your query is:

    SELECT * FROM mytable WHERE firstletter='P' ORDER BY date LIMIT 0, 200
    

    Then your index will need to be on the two columns (firstletter, date) in that order.

    The idea is that an index on multiple columns allows sorting by any column as long as you specified previous columns to be constants (single values) in a condition. So an index on A, B, C, D and E allows sorting by C if you specify A and B to be constants in a WHERE condition. A and B cannot be ranges.
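To see the "constant columns first, then the sort column" rule in action, here is a sketch using SQLite's EXPLAIN QUERY PLAN from Python; the table is invented and the exact plan wording varies between SQLite versions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (year TEXT, date TEXT, payload TEXT)")
# Composite index: the equality column first, the ORDER BY column second.
conn.execute("CREATE INDEX idx_year_date ON mytable (year, date)")
conn.executemany("INSERT INTO mytable VALUES (?, ?, ?)",
                 [("2012", "2012-01-%02d" % d, "x") for d in range(1, 11)])

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM mytable WHERE year='2012' ORDER BY date LIMIT 0, 200"
).fetchall()
plan_text = " ".join(str(row) for row in plan)
# With year pinned to a constant, the (year, date) index also supplies
# the date ordering, so the plan needs no separate sort step.
```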

    qid & accept id: (19163959, 19164011) query: Yesterday's date in where clase with HH:MM:SS soup:

    You could use

    \n
    TRUNC(TableT.STARTDATETIME) = TRUNC(sysdate-1)\n
    \n

    for this purpose to truncate both dates to the day on both side of the check. However, for this to be efficient, you'd need a function index on TRUNC(TableT.STARTDATETIME).

    \n

    Maybe better in general from a performance aspect:

    \n
    TableT.STARTDATETIME >= trunc(sysdate-1) AND TableT.STARTDATETIME < trunc(sysdate);\n
    \n

    This includes yesterday 00:00:00 (the >= ), but excludes today 00:00:00 (the <).

    \n

    Warning! Keep in mind, that for TIMESTAMP columns - while tempting because of its simplicity - don't use 23:59:59 as end time, as the 1 second time slot between 23:59:59 and 00:00:00 might contain data too - and this gap will leave them out of processing...

    \n soup wrap:

    You could use

    TRUNC(TableT.STARTDATETIME) = TRUNC(sysdate-1)
    

    for this purpose to truncate both dates to the day on both sides of the comparison. However, for this to be efficient, you'd need a function-based index on TRUNC(TableT.STARTDATETIME).

    Maybe better in general from a performance aspect:

    TableT.STARTDATETIME >= trunc(sysdate-1) AND TableT.STARTDATETIME < trunc(sysdate);
    

    This includes yesterday 00:00:00 (the >= ), but excludes today 00:00:00 (the <).

    Warning! Keep in mind that for TIMESTAMP columns, tempting as it is for its simplicity, you should not use 23:59:59 as the end time: the one-second slot between 23:59:59 and 00:00:00 may contain data too, and that gap would leave those rows out of processing.
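The trap described in the warning can be shown with plain Python datetimes (timestamps invented for the demo):

```python
from datetime import datetime, timedelta

rows = [
    datetime(2013, 10, 3, 0, 0, 0),             # yesterday 00:00:00
    datetime(2013, 10, 3, 23, 59, 59, 500000),  # inside the "lost" second
    datetime(2013, 10, 4, 0, 0, 0),             # today 00:00:00
]
today = datetime(2013, 10, 4)
yesterday = today - timedelta(days=1)

# Half-open range: >= yesterday, < today (as recommended above).
half_open = [r for r in rows if yesterday <= r < today]

# Closed range ending at 23:59:59 silently drops sub-second rows.
closed = [r for r in rows if yesterday <= r <= datetime(2013, 10, 3, 23, 59, 59)]
```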

    qid & accept id: (19181164, 19293511) query: how to change font in mysql database to store unicode charactors soup:

    We assume we have a DB with table articles, and a column named posts, which will save the article written in your blog. Best part, we know that all major DB’s support UTF8. And we shall explore that feature.

    \n

    Now we write a article in hindi, हेल्लो वर्ल्ड

    \n

    If the UTF8 is not specified, you should see something like ?????? in ur DB else u shud see the hindi data.

    \n

    Code:

    \n

    First check for UTF8 compatibility with this query. If it supports you should see the output as

    \n
    “Character_set_system”| “UTF8″\n
    \n

    SHOW VARIABLES LIKE

    \n
    ‘character_set_system’;\n
    \n

    Now that being checked, alter the table and just modify the column, Posts in our above example and specify it as UTF8

    \n
    ALTER TABLE articles MODIFY Posts VARCHAR(20) CHARACTER SET UTF8;\n
    \n

    Now, try to insert the hindi value and save it. Query it and u shud see the hindi text

    \n soup wrap:

    We assume we have a DB with a table articles and a column named posts, which will store the articles written in your blog. Best of all, we know that all major DBs support UTF-8, and we shall explore that feature.

    Now we write an article in Hindi: हेल्लो वर्ल्ड

    If UTF-8 is not specified, you will see something like ?????? in your DB; otherwise you should see the Hindi data.

    Code:

    First check for UTF-8 support with this query:

    SHOW VARIABLES LIKE 'character_set_system';

    If it is supported, you should see output like:

    “Character_set_system” | “UTF8”
    

    Now that this is confirmed, alter the table and modify the column (Posts in our example above) to specify UTF-8:

    ALTER TABLE articles MODIFY Posts VARCHAR(20) CHARACTER SET UTF8;
    

    Now, try to insert the Hindi value and save it. Query it and you should see the Hindi text.
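The ?????? symptom can be reproduced outside the database with a short Python sketch. This is only an analogy for what a single-byte column charset does; MySQL's actual conversion rules are more involved:

```python
text = "हेल्लो वर्ल्ड"  # the Hindi sample from the answer

# Stored and read back as UTF-8, the text survives intact.
round_trip = text.encode("utf-8").decode("utf-8")

# Forced through a single-byte charset (roughly what a latin1 column
# does), characters outside that charset come back as '?'.
mangled = text.encode("latin-1", errors="replace").decode("latin-1")
```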

    qid & accept id: (19189050, 19189434) query: T-SQL Query to Select current, previous or next week soup:
    DECLARE @CurrentDate SMALLDATETIME; -- Or DATE\n\nSET @CurrentDate = '20131004'\n\nSELECT  DATEADD(DAY, (DATEDIFF(DAY, 0, @CurrentDate) / 7) * 7, 0)  AS FirstDayOfTheWeek,\n        DATEADD(DAY, (DATEDIFF(DAY, 0, @CurrentDate) / 7) * 7 + 4, 0)  AS LastDayOfTheWeek\n
    \n

    Results:

    \n
    FirstDayOfTheWeek       LastDayOfTheWeek\n----------------------- -----------------------\n2013-09-30 00:00:00.000 2013-10-04 00:00:00.000\n
    \n

    All days between Monday and Friday:

    \n
    DECLARE @CurrentDate DATE;\nDECLARE @WeekNum SMALLINT;\n\nSET @CurrentDate = '20131004'\nSET @WeekNum = +1; -- -1 Previous WK, 0 Current WK, +1 Next WK\n\nSELECT   DATEADD(DAY, dof.DayNum, fdow.FirstDayOfTheWeek) AS DayAsDateTime\nFROM    (VALUES (DATEADD(DAY, (DATEDIFF(DAY, 0, @CurrentDate) / 7) * 7 + @WeekNum*7, 0)))  fdow(FirstDayOfTheWeek)\nCROSS JOIN (VALUES (0), (1), (2), (3), (4)) dof(DayNum)\n\n/*\nDayAsDateTime\n-----------------------\n2013-10-07 00:00:00.000\n2013-10-08 00:00:00.000\n2013-10-09 00:00:00.000\n2013-10-10 00:00:00.000\n2013-10-11 00:00:00.000\n*/\n\nSELECT  *\nFROM\n(\nSELECT   DATEADD(DAY, dof.DayNum, fdow.FirstDayOfTheWeek) AS DayAsDateTime, dof.DayNum\nFROM    (VALUES (DATEADD(DAY, (DATEDIFF(DAY, 0, @CurrentDate) / 7) * 7 + @WeekNum*7, 0)))  fdow(FirstDayOfTheWeek)\nCROSS JOIN (VALUES (0), (1), (2), (3), (4)) dof(DayNum)\n) src \nPIVOT( MAX(DayAsDateTime) FOR DayNum IN ([0], [1], [2], [3], [4]) ) pvt\n\n/*\n0                       1                       2                       3                       4\n----------------------- ----------------------- ----------------------- ----------------------- -----------------------\n2013-10-07 00:00:00.000 2013-10-08 00:00:00.000 2013-10-09 00:00:00.000 2013-10-10 00:00:00.000 2013-10-11 00:00:00.000\n*/\n
    \n soup wrap:
    DECLARE @CurrentDate SMALLDATETIME; -- Or DATE
    
    SET @CurrentDate = '20131004'
    
    SELECT  DATEADD(DAY, (DATEDIFF(DAY, 0, @CurrentDate) / 7) * 7, 0)  AS FirstDayOfTheWeek,
            DATEADD(DAY, (DATEDIFF(DAY, 0, @CurrentDate) / 7) * 7 + 4, 0)  AS LastDayOfTheWeek
    

    Results:

    FirstDayOfTheWeek       LastDayOfTheWeek
    ----------------------- -----------------------
    2013-09-30 00:00:00.000 2013-10-04 00:00:00.000
    

    All days between Monday and Friday:

    DECLARE @CurrentDate DATE;
    DECLARE @WeekNum SMALLINT;
    
    SET @CurrentDate = '20131004'
    SET @WeekNum = +1; -- -1 Previous WK, 0 Current WK, +1 Next WK
    
    SELECT   DATEADD(DAY, dof.DayNum, fdow.FirstDayOfTheWeek) AS DayAsDateTime
    FROM    (VALUES (DATEADD(DAY, (DATEDIFF(DAY, 0, @CurrentDate) / 7) * 7 + @WeekNum*7, 0)))  fdow(FirstDayOfTheWeek)
    CROSS JOIN (VALUES (0), (1), (2), (3), (4)) dof(DayNum)
    
    /*
    DayAsDateTime
    -----------------------
    2013-10-07 00:00:00.000
    2013-10-08 00:00:00.000
    2013-10-09 00:00:00.000
    2013-10-10 00:00:00.000
    2013-10-11 00:00:00.000
    */
    
    SELECT  *
    FROM
    (
    SELECT   DATEADD(DAY, dof.DayNum, fdow.FirstDayOfTheWeek) AS DayAsDateTime, dof.DayNum
    FROM    (VALUES (DATEADD(DAY, (DATEDIFF(DAY, 0, @CurrentDate) / 7) * 7 + @WeekNum*7, 0)))  fdow(FirstDayOfTheWeek)
    CROSS JOIN (VALUES (0), (1), (2), (3), (4)) dof(DayNum)
    ) src 
    PIVOT( MAX(DayAsDateTime) FOR DayNum IN ([0], [1], [2], [3], [4]) ) pvt
    
    /*
    0                       1                       2                       3                       4
    ----------------------- ----------------------- ----------------------- ----------------------- -----------------------
    2013-10-07 00:00:00.000 2013-10-08 00:00:00.000 2013-10-09 00:00:00.000 2013-10-10 00:00:00.000 2013-10-11 00:00:00.000
    */
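The same first-day/last-day arithmetic can be re-derived in Python; date(2013, 10, 4) matches the example above, and the result agrees with the T-SQL output:

```python
from datetime import date, timedelta

def week_days(current, week_num=0):
    """Return Monday..Friday of the week `week_num` weeks away
    (-1 previous, 0 current, +1 next)."""
    monday = current - timedelta(days=current.weekday()) + timedelta(weeks=week_num)
    return [monday + timedelta(days=i) for i in range(5)]

days = week_days(date(2013, 10, 4), +1)  # next week, Monday to Friday
```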
    
    qid & accept id: (19211707, 19212599) query: How to query specific category or all categories inside the same query? soup:

    Building on what @JamesMarks was offering, it would be simpler to use a query like

    \n
    $query = "SELECT * FROM table WHERE category = ? OR 1 = ?;"\n
    \n

    Then pass your $category for the first parameter, and either 1 or 0 as the second parameter. If you pass 1, then the second term becomes 1 = 1. That's always true, so the whole expression is always true. If you pass 0, then the second term is 1 = 0 and that's always false, but then the whole expression will be true only if category = $category matches.

    \n

    That's simpler and better style than designating a special value 0 for "any category."

    \n

    An alternative solution is to build the query dynamically:

    \n
    $where = array();\nif ($category) {\n    $where[] = "category = ?";\n    $params[] = $category;\n}\n\n... perhaps add more terms to $where conditionally ...\n\n$query = "SELECT * FROM table";\nif ($where) {\n    $query .= " WHERE " . implode(" AND ", $where);\n}\n
    \n soup wrap:

    Building on what @JamesMarks was offering, it would be simpler to use a query like

    $query = "SELECT * FROM table WHERE category = ? OR 1 = ?;"
    

    Then pass your $category for the first parameter, and either 1 or 0 as the second parameter. If you pass 1, then the second term becomes 1 = 1. That's always true, so the whole expression is always true. If you pass 0, then the second term is 1 = 0 and that's always false, but then the whole expression will be true only if category = $category matches.

    That's simpler and better style than designating a special value 0 for "any category."

    An alternative solution is to build the query dynamically:

    $where = array();
    if ($category) {
        $where[] = "category = ?";
        $params[] = $category;
    }
    
    ... perhaps add more terms to $where conditionally ...
    
    $query = "SELECT * FROM table";
    if ($where) {
        $query .= " WHERE " . implode(" AND ", $where);
    }
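The same dynamic-query idea in Python, for comparison. The table name and the extra author column are invented; the key point is that WHERE terms and parameters are appended in lockstep:

```python
def build_query(category=None, author=None):
    where, params = [], []
    if category:
        where.append("category = ?")
        params.append(category)
    if author:  # add further optional terms the same way
        where.append("author = ?")
        params.append(author)
    query = "SELECT * FROM table"
    if where:
        query += " WHERE " + " AND ".join(where)
    return query, params

q, p = build_query(category="news")
```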
    
    qid & accept id: (19256123, 19256721) query: In SQL query to find duplicates in one column then use a second column to determine which record to return soup:

    I think that should do the job on a DB2 as well:

    \n
    SELECT Column1, Column2, \n       MAX (CASE Column3 WHEN 2 THEN 2 ELSE NULL END)\n  FROM t\n GROUP BY Column1, Column2;\n
    \n

    See this Fiddle for an ORACLE database.

    \n

    Result:

    \n
    COLUMN1     COLUMN2         COLUMN3\n---------   -----------     -------\n134024323   81999000004     (null)\n127001126   90489495251     2\n346122930   346000016       2\n346207637   346000016       (null)\n
    \n soup wrap:

    I think this should do the job on DB2 as well:

    SELECT Column1, Column2, 
           MAX (CASE Column3 WHEN 2 THEN 2 ELSE NULL END)
      FROM t
     GROUP BY Column1, Column2;
    

    See this Fiddle for an ORACLE database.

    Result:

    COLUMN1     COLUMN2         COLUMN3
    ---------   -----------     -------
    134024323   81999000004     (null)
    127001126   90489495251     2
    346122930   346000016       2
    346207637   346000016       (null)
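The query runs unchanged on SQLite, which makes the tie-breaking easy to verify from Python. The sample rows are adapted from the answer, with one duplicated (Column1, Column2) pair added so the MAX(CASE ...) pick is visible:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Column1 INTEGER, Column2 INTEGER, Column3 INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (134024323, 81999000004, None),
    (127001126, 90489495251, 2),
    (346122930, 346000016, 2),
    (346122930, 346000016, None),  # duplicate pair; the row with 2 wins
    (346207637, 346000016, None),
])
rows = conn.execute("""
    SELECT Column1, Column2,
           MAX(CASE Column3 WHEN 2 THEN 2 ELSE NULL END)
      FROM t
     GROUP BY Column1, Column2
""").fetchall()
```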
    
    qid & accept id: (19268811, 19268839) query: Set default value in query when value is null soup:

    Use the following:

    \n
    SELECT RegName,\n       RegEmail,\n       RegPhone,\n       RegOrg,\n       RegCountry,\n       DateReg,\n       ISNULL(Website,'no website')  AS WebSite \nFROM   RegTakePart \nWHERE  Reject IS NULL\n
    \n

    or as, @Lieven noted:

    \n
    SELECT RegName,\n       RegEmail,\n       RegPhone,\n       RegOrg,\n       RegCountry,\n       DateReg,\n       COALESCE(Website,'no website')  AS WebSite \nFROM   RegTakePart \nWHERE  Reject IS NULL\n
    \n

    The dynamic of COALESCE is that you may define more arguments, so if the first is null then get the second, if the second is null get the third etc etc...

    \n soup wrap:

    Use the following:

    SELECT RegName,
           RegEmail,
           RegPhone,
           RegOrg,
           RegCountry,
           DateReg,
           ISNULL(Website,'no website')  AS WebSite 
    FROM   RegTakePart 
    WHERE  Reject IS NULL
    

    or as, @Lieven noted:

    SELECT RegName,
           RegEmail,
           RegPhone,
           RegOrg,
           RegCountry,
           DateReg,
           COALESCE(Website,'no website')  AS WebSite 
    FROM   RegTakePart 
    WHERE  Reject IS NULL
    

    The advantage of COALESCE is that you may pass more than two arguments: if the first is null it returns the second, if the second is null the third, and so on.
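A quick check of the behaviour on a reduced version of the table, using SQLite from Python (note SQLite spells the two-argument form IFNULL rather than ISNULL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE RegTakePart (RegName TEXT, Website TEXT)")
conn.executemany("INSERT INTO RegTakePart VALUES (?, ?)", [
    ("alice", "https://example.com"),
    ("bob", None),
])
rows = conn.execute(
    "SELECT RegName, COALESCE(Website, 'no website') AS WebSite "
    "FROM RegTakePart ORDER BY RegName"
).fetchall()
# COALESCE returns its first non-NULL argument, however many are given.
```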

    qid & accept id: (19270316, 19276815) query: Count sequential matching words in two strings oracle soup:

    Personally, in this situation, I would choose PL/SQL code over plain SQL. Something like:

    \n

    Package specification:

    \n
    create or replace package PKG is\n  function NumOfSeqWords(\n    p_str1 in varchar2,\n    p_str2 in varchar2\n  ) return number;\nend;\n
    \n

    Package body:

    \n
    create or replace package body PKG is\n  function NumOfSeqWords(\n    p_str1 in varchar2,\n    p_str2 in varchar2\n  ) return number\n  is\n    l_str1     varchar2(4000) := p_str1;\n    l_str2     varchar2(4000) := p_str2;\n    l_res      number  default 0;\n    l_del_pos1 number;\n    l_del_pos2 number;\n    l_word1    varchar2(1000);\n    l_word2    varchar2(1000);\n  begin\n    loop\n      l_del_pos1 := instr(l_str1, ' ');\n      l_del_pos2 := instr(l_str2, ' ');\n      case l_del_pos1\n        when 0 \n        then l_word1 := l_str1;\n             l_str1 := ''; \n        else l_word1 := substr(l_str1, 1, l_del_pos1 - 1);\n      end case;\n      case l_del_pos2\n        when 0 \n        then l_word2 := l_str2;\n             l_str2 := ''; \n        else l_word2 := substr(l_str2, 1, l_del_pos2 - 1);\n      end case;\n      exit when (l_word1 <> l_word2) or \n                ((l_word1 is null) or (l_word2 is null));\n\n      l_res := l_res + 1;\n      l_str1 := substr(l_str1, l_del_pos1 + 1);\n      l_str2 := substr(l_str2, l_del_pos2 + 1);\n    end loop;\n    return l_res;\n  end;\nend;\n
    \n

    Test case:

    \n
     with t1(Id1, col1, col2) as(\n   select 1, 'foo bar live'  ,'foo bar'     from dual union all\n   select 2, 'foo live tele' ,'foo tele'    from dual union all\n   select 3, 'bar foo live'  ,'foo bar live'from dual\n  )\n  select id1\n       , col1\n       , col2\n       , pkg.NumOfSeqWords(col1, col2) as res\n    from t1\n  ;\n
    \n

    Result:

    \n
           ID1 COL1          COL2                RES\n---------- ------------- ------------ ----------\n         1 foo bar live  foo bar               2\n         2 foo live tele foo tele              1\n         3 bar foo live  foo bar live          0\n
    \n soup wrap:

    Personally, in this situation, I would choose PL/SQL code over plain SQL. Something like:

    Package specification:

    create or replace package PKG is
      function NumOfSeqWords(
        p_str1 in varchar2,
        p_str2 in varchar2
      ) return number;
    end;
    

    Package body:

    create or replace package body PKG is
      function NumOfSeqWords(
        p_str1 in varchar2,
        p_str2 in varchar2
      ) return number
      is
        l_str1     varchar2(4000) := p_str1;
        l_str2     varchar2(4000) := p_str2;
        l_res      number  default 0;
        l_del_pos1 number;
        l_del_pos2 number;
        l_word1    varchar2(1000);
        l_word2    varchar2(1000);
      begin
        loop
          l_del_pos1 := instr(l_str1, ' ');
          l_del_pos2 := instr(l_str2, ' ');
          case l_del_pos1
            when 0 
            then l_word1 := l_str1;
                 l_str1 := ''; 
            else l_word1 := substr(l_str1, 1, l_del_pos1 - 1);
          end case;
          case l_del_pos2
            when 0 
            then l_word2 := l_str2;
                 l_str2 := ''; 
            else l_word2 := substr(l_str2, 1, l_del_pos2 - 1);
          end case;
          exit when (l_word1 <> l_word2) or 
                    ((l_word1 is null) or (l_word2 is null));
    
          l_res := l_res + 1;
          l_str1 := substr(l_str1, l_del_pos1 + 1);
          l_str2 := substr(l_str2, l_del_pos2 + 1);
        end loop;
        return l_res;
      end;
    end;
    

    Test case:

     with t1(Id1, col1, col2) as(
       select 1, 'foo bar live'  ,'foo bar'     from dual union all
       select 2, 'foo live tele' ,'foo tele'    from dual union all
       select 3, 'bar foo live'  ,'foo bar live'from dual
      )
      select id1
           , col1
           , col2
           , pkg.NumOfSeqWords(col1, col2) as res
        from t1
      ;
    

    Result:

           ID1 COL1          COL2                RES
    ---------- ------------- ------------ ----------
             1 foo bar live  foo bar               2
             2 foo live tele foo tele              1
             3 bar foo live  foo bar live          0
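For comparison, the same logic is a few lines of Python; the assertions mirror the test case above:

```python
def num_of_seq_words(s1, s2):
    """Count leading words that match in sequence, like NumOfSeqWords."""
    count = 0
    for w1, w2 in zip(s1.split(" "), s2.split(" ")):
        if w1 != w2:
            break
        count += 1
    return count
```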
    
    qid & accept id: (19270491, 19272776) query: What is the best way to SELECT data when there are two possible tables holding the detail information? soup:

    What I've done before in a similar situation is introduce a raw query with all possible values, along with the precedence of the value; then use a ROW_NUMBER outer query to get just the value with the highest precedence.

    \n

    I'm going to use your (excellent) sample data, and everything goes after the insert into @GroupWeight.

    \n

    This is our raw data:

    \n
    -- the product weights (use INNER JOIN to only find \n--   the products with their own weights)\nSELECT\n    p.ProductId,\n    p.ProductName,\n    m.MaterialId,\n    m.MaterialName,\n    pw.Weight,\n    'Product' WeightSource,\n    20 Precedence\nFROM\n    @Product p\n    INNER JOIN @ProductWeight pw ON pw.ProductId = p.ProductId\n    INNER JOIN @Material m ON m.MaterialId = pw.MaterialId\nUNION ALL\n-- the group weight\nSELECT\n    p.ProductId,\n    p.ProductName,\n    m.MaterialId,\n    m.MaterialName,\n    gw.Weight,\n    'Group' WeightSource,\n    10 Precedence\nFROM\n    @Product p\n    INNER JOIN @GroupWeight gw on gw.GroupId = p.GroupId\n    INNER JOIN @Material m ON m.MaterialId = gw.MaterialId\n
    \n

    This will return one row for each product-material with a specific weight, plus one row for each product-material. Each row indicates whether it is a product weight or a group weight.

    \n

    We can then number the rows, ordering by precedence:

    \n
    -- assume the above is in a CTE named AllWeights\nSELECT \n    *,\n    ROW_NUMBER() OVER (PARTITION BY ProductId, MaterialId \n                       ORDER BY Precedence DESC) rn\nFROM \n    AllWeights\n
    \n

    Which gives us the same data with an additional indication of which row for a given product-material is the relevant one, so finally we can get just that:

    \n
    -- assume the above is in a CTE named RowNumbered\nSELECT\n    ProductName,\n    MaterialName,\n    WeightSource,\n    Weight\nFROM\n    RowNumbered\nWHERE\n    rn = 1\n;\n
    \n

    And we're done.

    \n
    \n

    Putting it all together:

    \n
    ;WITH AllWeights AS (\n-- the product weights (use INNER JOIN to only find \n--   the products with their own weights)\nSELECT\n    p.ProductId,\n    p.ProductName,\n    m.MaterialId,\n    m.MaterialName,\n    pw.Weight,\n    'Product' WeightSource,\n    20 Precedence\nFROM\n    @Product p\n    INNER JOIN @ProductWeight pw ON pw.ProductId = p.ProductId\n    INNER JOIN @Material m ON m.MaterialId = pw.MaterialId\nUNION ALL\n-- the group weight\nSELECT\n    p.ProductId,\n    p.ProductName,\n    m.MaterialId,\n    m.MaterialName,\n    gw.Weight,\n    'Group' WeightSource,\n    10 Precedence\nFROM\n    @Product p\n    INNER JOIN @GroupWeight gw on gw.GroupId = p.GroupId\n    INNER JOIN @Material m ON m.MaterialId = gw.MaterialId\n),\nRowNumbered AS (\nSELECT \n    *,\n    ROW_NUMBER() OVER (PARTITION BY ProductId, MaterialId \n                       ORDER BY Precedence DESC) rn\nFROM \n    AllWeights\n)\nSELECT\n    ProductName,\n    MaterialName,\n    WeightSource,\n    Weight\nFROM\n    RowNumbered\nWHERE\n    rn = 1\n;\n
    \n

    Output:

    \n
    ProductName          MaterialName WeightSource Weight\n-------------------- ------------ ------------ ------------\nCan of soup          Paper        Product      5.20\nCan of soup          Steel        Product      23.10\nCan of beans         Paper        Group        5.20\nCan of beans         Steel        Group        23.10\nBottle of beer       Paper        Product      4.60\nBottle of beer       Steel        Product      2.40\nBottle of beer       Glass        Product      185.90\nBottle of wine       Paper        Product      5.10\nBottle of wine       Steel        Product      2.60\nBottle of wine       Glass        Product      650.40\nBottle of sauce      Paper        Group        4.85\nBottle of sauce      Steel        Group        2.50\nBottle of sauce      Glass        Group        418.15\n
    \n

    which except for order is the same as yours, I think.

    \n

    You'll have to check performance yourself, of course.

    \n soup wrap:

    What I've done before in a similar situation is introduce a raw query with all possible values, along with the precedence of the value; then use a ROW_NUMBER outer query to get just the value with the highest precedence.

    I'm going to use your (excellent) sample data, and everything goes after the insert into @GroupWeight.

    This is our raw data:

    -- the product weights (use INNER JOIN to only find 
    --   the products with their own weights)
    SELECT
        p.ProductId,
        p.ProductName,
        m.MaterialId,
        m.MaterialName,
        pw.Weight,
        'Product' WeightSource,
        20 Precedence
    FROM
        @Product p
        INNER JOIN @ProductWeight pw ON pw.ProductId = p.ProductId
        INNER JOIN @Material m ON m.MaterialId = pw.MaterialId
    UNION ALL
    -- the group weight
    SELECT
        p.ProductId,
        p.ProductName,
        m.MaterialId,
        m.MaterialName,
        gw.Weight,
        'Group' WeightSource,
        10 Precedence
    FROM
        @Product p
        INNER JOIN @GroupWeight gw on gw.GroupId = p.GroupId
        INNER JOIN @Material m ON m.MaterialId = gw.MaterialId
    

    This will return one row for each product-material pair that has its own specific weight, plus one row for each product-material pair covered by a group weight. Each row indicates whether it is a product weight or a group weight.

    We can then number the rows, ordering by precedence:

    -- assume the above is in a CTE named AllWeights
    SELECT 
        *,
        ROW_NUMBER() OVER (PARTITION BY ProductId, MaterialId 
                           ORDER BY Precedence DESC) rn
    FROM 
        AllWeights
    

    Which gives us the same data with an additional indication of which row for a given product-material is the relevant one, so finally we can get just that:

    -- assume the above is in a CTE named RowNumbered
    SELECT
        ProductName,
        MaterialName,
        WeightSource,
        Weight
    FROM
        RowNumbered
    WHERE
        rn = 1
    ;
    

    And we're done.


    Putting it all together:

    ;WITH AllWeights AS (
    -- the product weights (use INNER JOIN to only find 
    --   the products with their own weights)
    SELECT
        p.ProductId,
        p.ProductName,
        m.MaterialId,
        m.MaterialName,
        pw.Weight,
        'Product' WeightSource,
        20 Precedence
    FROM
        @Product p
        INNER JOIN @ProductWeight pw ON pw.ProductId = p.ProductId
        INNER JOIN @Material m ON m.MaterialId = pw.MaterialId
    UNION ALL
    -- the group weight
    SELECT
        p.ProductId,
        p.ProductName,
        m.MaterialId,
        m.MaterialName,
        gw.Weight,
        'Group' WeightSource,
        10 Precedence
    FROM
        @Product p
        INNER JOIN @GroupWeight gw on gw.GroupId = p.GroupId
        INNER JOIN @Material m ON m.MaterialId = gw.MaterialId
    ),
    RowNumbered AS (
    SELECT 
        *,
        ROW_NUMBER() OVER (PARTITION BY ProductId, MaterialId 
                           ORDER BY Precedence DESC) rn
    FROM 
        AllWeights
    )
    SELECT
        ProductName,
        MaterialName,
        WeightSource,
        Weight
    FROM
        RowNumbered
    WHERE
        rn = 1
    ;
    

    Output:

    ProductName          MaterialName WeightSource Weight
    -------------------- ------------ ------------ ------------
    Can of soup          Paper        Product      5.20
    Can of soup          Steel        Product      23.10
    Can of beans         Paper        Group        5.20
    Can of beans         Steel        Group        23.10
    Bottle of beer       Paper        Product      4.60
    Bottle of beer       Steel        Product      2.40
    Bottle of beer       Glass        Product      185.90
    Bottle of wine       Paper        Product      5.10
    Bottle of wine       Steel        Product      2.60
    Bottle of wine       Glass        Product      650.40
    Bottle of sauce      Paper        Group        4.85
    Bottle of sauce      Steel        Group        2.50
    Bottle of sauce      Glass        Group        418.15
    

    which, except for ordering, is the same as yours, I think.

    You'll have to check performance yourself, of course.
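The precedence-pick at the heart of the query can also be sketched in plain Python, which makes the intent easy to unit-test (rows invented, following the output above):

```python
# (product, material, weight, source, precedence) -- product weights
# carry precedence 20, group weights 10, as in the query above.
rows = [
    ("Can of soup",  "Paper", 5.20, "Product", 20),
    ("Can of soup",  "Paper", 4.85, "Group",   10),
    ("Can of beans", "Paper", 5.20, "Group",   10),
]

best = {}
for product, material, weight, source, precedence in rows:
    key = (product, material)
    # Keep only the highest-precedence entry per key, which is what
    # ROW_NUMBER ... ORDER BY Precedence DESC with rn = 1 does.
    if key not in best or precedence > best[key][2]:
        best[key] = (weight, source, precedence)
```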

    qid & accept id: (19279889, 19280129) query: Removing the prefix of a string in TSQL soup:

    Try this :

    \n
    RIGHT(words, LEN(words) - (LEN(prefix+'?')-1))\n
    \n

    EDITED :

    \n

    May be you will find this one "cleaner" :

    \n
    RIGHT(words, LEN(words) - DATALENGTH(CONVERT(VARCHAR(100),prefix)))\n
    \n soup wrap:

    Try this:

    RIGHT(words, LEN(words) - (LEN(prefix+'?')-1))
    

    EDITED:

    Maybe you will find this one "cleaner":

    RIGHT(words, LEN(words) - DATALENGTH(CONVERT(VARCHAR(100),prefix)))
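For reference, the reason for the '?' padding and the DATALENGTH conversion is that T-SQL's LEN ignores trailing spaces in the prefix. A Python equivalent has no such quirk:

```python
def strip_prefix(word, prefix):
    # Python's len() counts trailing spaces, unlike T-SQL's LEN(),
    # so no padding workaround is needed here.
    if word.startswith(prefix):
        return word[len(prefix):]
    return word
```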
    
    qid & accept id: (19307842, 19307904) query: Calling a stored procedure with a select soup:

    The stored procedure is populating RT but you then need to select out of it:

    \n
    CREATE OR REPLACE PROCEDURE MDC_UTIL_PROCEDURE (results OUT SYS_REFCURSOR)\nAS\n    RT MDC_CAT_PARAMETROS%ROWTYPE;\nBEGIN\n    SELECT * INTO RT FROM MDC_CAT_PARAMETROS WHERE PARAM_LLAVE='SMTP_SERVER';\n    OPEN results FOR SELECT * FROM RT;\nEND MDC_UTIL_PROCEDURE; \n
    \n

    or you could simplify it to get rid of the RT variable:

    \n
    CREATE OR REPLACE PROCEDURE MDC_UTIL_PROCEDURE (results OUT SYS_REFCURSOR)\nAS\nBEGIN\n    OPEN results FOR \n    SELECT * FROM MDC_CAT_PARAMETROS WHERE PARAM_LLAVE='SMTP_SERVER';\nEND MDC_UTIL_PROCEDURE; \n
    \n soup wrap:

    The stored procedure is populating RT but you then need to select out of it:

    CREATE OR REPLACE PROCEDURE MDC_UTIL_PROCEDURE (results OUT SYS_REFCURSOR)
    AS
        RT MDC_CAT_PARAMETROS%ROWTYPE;
    BEGIN
        SELECT * INTO RT FROM MDC_CAT_PARAMETROS WHERE PARAM_LLAVE='SMTP_SERVER';
        OPEN results FOR SELECT * FROM RT;
    END MDC_UTIL_PROCEDURE; 
    

    or you could simplify it to get rid of the RT variable:

    CREATE OR REPLACE PROCEDURE MDC_UTIL_PROCEDURE (results OUT SYS_REFCURSOR)
    AS
    BEGIN
        OPEN results FOR 
        SELECT * FROM MDC_CAT_PARAMETROS WHERE PARAM_LLAVE='SMTP_SERVER';
    END MDC_UTIL_PROCEDURE; 
    
    qid & accept id: (19329816, 19333964) query: Hierarchical Query( how to retrieve middle nodes) soup:

    soup wrap:

    You can use the CONNECT_BY_ISLEAF pseudocolumn for this.

    select level,  first_name ||' '|| last_name "FullName" 
    from more_employees
    where connect_by_isleaf = 0 and manager_id is not null
    start with employee_id = 1
    connect by prior employee_id = manager_id;
    

    You can also use that to get all leaves:

    select level,  first_name ||' '|| last_name "FullName" 
    from more_employees
    where connect_by_isleaf = 1
    start with employee_id = 1
    connect by prior employee_id = manager_id;
    

    This is probably faster than your solution with a sub-select.

    Here is an SQLFiddle example: http://sqlfiddle.com/#!4/511d9/2
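    CONNECT BY and CONNECT_BY_ISLEAF are Oracle-specific. As a rough illustration of the same idea for engines with recursive CTEs, here is a sketch in Python/SQLite with a hypothetical emp table: middle nodes are rows below the root that still have children.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE emp (employee_id INTEGER, manager_id INTEGER, name TEXT);
INSERT INTO emp VALUES (1, NULL, 'Root'), (2, 1, 'Mid'),
                       (3, 2, 'Leaf A'), (4, 2, 'Leaf B');
""")
# Walk the tree from the root, then keep non-root rows that have at
# least one direct report (the equivalent of connect_by_isleaf = 0).
rows = con.execute("""
WITH RECURSIVE tree(employee_id, name, lvl) AS (
  SELECT employee_id, name, 1 FROM emp WHERE employee_id = 1
  UNION ALL
  SELECT e.employee_id, e.name, t.lvl + 1
  FROM emp e JOIN tree t ON e.manager_id = t.employee_id
)
SELECT name FROM tree t
WHERE t.lvl > 1
  AND EXISTS (SELECT 1 FROM emp c WHERE c.manager_id = t.employee_id)
""").fetchall()
print(rows)  # [('Mid',)]
```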

    qid & accept id: (19353473, 19353591) query: SQL multiple table join throwing dupes soup:

    soup wrap:

    Your first couple of joins (Video/VideoTags/Tags) yield a table like so:

    VideoID = 1 will bring in TagID = 2,5 (Dogs, orlyowl) so you have this
    
    | 1 | Dogs
    | 1 | orlyowl
    

    When you join to VideoChannels, it duplicates the above entries for each channel:

    | 1 | Dogs    | 1
    | 1 | orlyowl | 1
    | 1 | Dogs    | 4
    | 1 | orlyowl | 4
    | 1 | Dogs    | 6
    | 1 | orlyowl | 6
    

    GROUP_CONCAT has a DISTINCT attribute:

    select v.*
      , group_concat(distinct t.tagName) Tags
      , group_concat(distinct c.channelName) Channels
    from videos as v 
    inner join videoTags as vt on v.videoId = vt.videoid
    inner join tags as t on t.tagId = vt.tagId
    inner join videoChannels as vc on v.videoId = vc.videoId
    inner join channels as c on c.channelId = vc.channelId
    group by v.videoId;
    
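    The row-multiplication effect and the DISTINCT fix can be demonstrated in SQLite via Python's sqlite3 (made-up tables; SQLite's group_concat supports DISTINCT much like MySQL's GROUP_CONCAT):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE videoTags (videoId INTEGER, tag TEXT);
CREATE TABLE videoChannels (videoId INTEGER, channel TEXT);
INSERT INTO videoTags VALUES (1, 'Dogs'), (1, 'orlyowl');
INSERT INTO videoChannels VALUES (1, 'ch1'), (1, 'ch4'), (1, 'ch6');
""")
# The two joins multiply out to 2 tags x 3 channels = 6 rows per video...
n = con.execute("""
SELECT COUNT(*) FROM videoTags t JOIN videoChannels c ON t.videoId = c.videoId
""").fetchone()[0]
print(n)  # 6
# ...but GROUP_CONCAT(DISTINCT ...) collapses the repeats again.
tags = con.execute("""
SELECT group_concat(DISTINCT t.tag)
FROM videoTags t JOIN videoChannels c ON t.videoId = c.videoId
GROUP BY t.videoId
""").fetchone()[0]
print(sorted(tags.split(',')))  # ['Dogs', 'orlyowl']
```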
    qid & accept id: (19356906, 19357183) query: Show modified strings that appear more than once soup:

    soup wrap:

    Just add

    GROUP BY Keydomain
    HAVING COUNT(*) > 1
    

    to your query.
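    A minimal SQLite illustration (via Python, with a hypothetical sites table) of the GROUP BY ... HAVING COUNT(*) > 1 pattern for surfacing values that appear more than once:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sites (domain TEXT)")
con.executemany("INSERT INTO sites VALUES (?)",
                [("a.com",), ("a.com",), ("b.com",)])
# Only groups with more than one row survive the HAVING filter.
dupes = con.execute("""
SELECT domain, COUNT(*) FROM sites
GROUP BY domain
HAVING COUNT(*) > 1
""").fetchall()
print(dupes)  # [('a.com', 2)]
```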

    EDIT:

    Could you tell me if there is a way to list the complete domains one by one with your addition?

    SELECT * FROM
    (
    SELECT 
    CASE 
    WHEN LENGTH(domain) - LENGTH(REPLACE(domain, '.', '')) = 1 THEN REVERSE(SUBSTRING(REVERSE(domain), LOCATE('.', REVERSE(domain)) + 1, 1000))
    WHEN LENGTH(domain) - LENGTH(REPLACE(domain, '.', '')) = 2 THEN REVERSE(SUBSTRING(REVERSE(REVERSE(SUBSTRING(REVERSE(domain), LOCATE('.', REVERSE(domain)) + 1, 1000))), LOCATE('.', REVERSE(REVERSE(SUBSTRING(REVERSE(domain), LOCATE('.', REVERSE(domain)) + 1, 1000)))) + 1, 1000))
    END as Keydomain
    FROM sites
    GROUP BY Keydomain
    HAVING COUNT(*) > 1
    ) d1
    INNER JOIN
    (
    SELECT id, domain,
    CASE 
    WHEN LENGTH(domain) - LENGTH(REPLACE(domain, '.', '')) = 1 THEN REVERSE(SUBSTRING(REVERSE(domain), LOCATE('.', REVERSE(domain)) + 1, 1000))
    WHEN LENGTH(domain) - LENGTH(REPLACE(domain, '.', '')) = 2 THEN REVERSE(SUBSTRING(REVERSE(REVERSE(SUBSTRING(REVERSE(domain), LOCATE('.', REVERSE(domain)) + 1, 1000))), LOCATE('.', REVERSE(REVERSE(SUBSTRING(REVERSE(domain), LOCATE('.', REVERSE(domain)) + 1, 1000)))) + 1, 1000))
    END as Keydomain
    FROM sites
    ) d2
    ON d1.Keydomain = d2.Keydomain
    
    qid & accept id: (19359464, 19359558) query: create 1 column from 2 column with in SQL soup:

    soup wrap:

    Please try:

    select 
        a.col2+'#'+b.col2 
    from 
        T1 a, T1 b 
    where a.col1='Con' and 
        b.col1='Arr'
    

    OR

    select 
        a.col2+'#'+b.col2 
    from 
        T1 a CROSS JOIN T1 b 
    where a.col1='Con' and 
        b.col1='Arr'
    
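    The same cross join works in most engines, though string concatenation is + in SQL Server and || in standard SQL. A sketch in Python/SQLite with made-up data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE T1 (col1 TEXT, col2 TEXT)")
con.executemany("INSERT INTO T1 VALUES (?, ?)",
                [("Con", "IBM"), ("Arr", "DELL")])
# Self cross join, filtered to one 'Con' row and one 'Arr' row,
# concatenated with '#' in between.
row = con.execute("""
SELECT a.col2 || '#' || b.col2
FROM T1 a CROSS JOIN T1 b
WHERE a.col1 = 'Con' AND b.col1 = 'Arr'
""").fetchone()
print(row[0])  # IBM#DELL
```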
    qid & accept id: (19408757, 19419243) query: Cakephp - Adding data to relational database soup:

    soup wrap:

    To create a select:

    in your ImagesController

    public function add() {
        //
        // ...
        //
        $albums = $this->Image->Album->find('list');
        $this->set('albums', $albums);
    }
    

    somewhere in your add.ctp view file

    echo $this->Form->input('album_id');
    
    qid & accept id: (19436954, 19437127) query: Change some value each 5 inserts (MySQL Stored Procedure) soup:

    soup wrap:

    Will using the following work?

    CAST((Counter / 5) AS UNSIGNED)
    

    OR

    FLOOR(Counter / 5)
    

    OR

    FORMAT((Counter / 5),0)
    

    It would look something like the following:

    VALUES 
      ("Hello!", 
      "Click here.",
      "Can you tell me your name?",
      "example.com/img.jpg",
      "google.com",
      CAST((Counter / 5) AS UNSIGNED),
      40,
      2013);
    
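    The effect of FLOOR(Counter / 5) is plain integer division: the value increments once every five rows. The same arithmetic in Python:

```python
# Counter // 5 stays constant for five consecutive counter values,
# then steps up by one -- the "change every 5 inserts" behaviour.
values = [counter // 5 for counter in range(10)]
print(values)  # [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
```

Note that in MySQL `/` is real division, which is why the answer wraps it in CAST, FLOOR, or FORMAT.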
    qid & accept id: (19447701, 19448095) query: Change single database datetime format soup:

    soup wrap:

    Use this query:

    SELECT CONVERT(VARCHAR(10), convert(date,'2013/10/18'), 103) AS [DD/MM/YYYY]
    

    OR

    SELECT CONVERT(VARCHAR(10), getdate(), 103) AS [DD/MM/YYYY]
    
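    Style 103 in SQL Server's CONVERT produces the dd/mm/yyyy form. The equivalent formatting in Python, for comparison:

```python
from datetime import date

d = date(2013, 10, 18)
# %d/%m/%Y mirrors CONVERT(VARCHAR(10), ..., 103).
print(d.strftime("%d/%m/%Y"))  # 18/10/2013
```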
    qid & accept id: (19459274, 19562334) query: Sequential Group By in sql server soup:

    soup wrap:

    Per the tag I added to your question this is a gaps and islands problem.

    The best performing solution will likely be

    WITH T
         AS (SELECT *,
                    ID - ROW_NUMBER() OVER (PARTITION BY [STATUS] ORDER BY [ID]) AS Grp
             FROM   YourTable)
    SELECT [STATUS],
           SUM([VALUE]) AS [SUM(VALUE)]
    FROM   T
    GROUP  BY [STATUS],
              Grp
    ORDER  BY MIN(ID)
    

    If the ID values were not guaranteed contiguous as stated then you would need to use

    ROW_NUMBER() OVER (ORDER BY [ID]) - 
           ROW_NUMBER() OVER (PARTITION BY [STATUS] ORDER BY [ID]) AS Grp
    

    Instead in the CTE definition.

    SQL Fiddle
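    The ID - ROW_NUMBER() gaps-and-islands trick can be tried in SQLite (3.25+ for window functions) via Python; the table and data here are made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")  # needs SQLite >= 3.25 for window functions
con.execute("CREATE TABLE t (id INTEGER, status TEXT, value INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (1, 'A', 10), (2, 'A', 20), (3, 'B', 5), (4, 'A', 1), (5, 'A', 2),
])
# Within each status, id - ROW_NUMBER() is constant across a contiguous
# run ("island"), so grouping by (status, grp) sums each run separately.
rows = con.execute("""
WITH g AS (
  SELECT *, id - ROW_NUMBER() OVER (PARTITION BY status ORDER BY id) AS grp
  FROM t
)
SELECT status, SUM(value) FROM g
GROUP BY status, grp
ORDER BY MIN(id)
""").fetchall()
print(rows)  # [('A', 30), ('B', 5), ('A', 3)]
```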

    qid & accept id: (19499472, 19499896) query: sql query get multiple values from same column for one row soup:

    soup wrap:

    If you are selecting email and phone in subqueries, these two joins are probably unnecessary:

    left join StaffContactInformation as sci on sr.ID = sci.StaffID
    inner join dictStaffContactTypes as dsct on sci.ContactTypeID = dsct.ID
    

    Because of them you are getting as many rows as there are contacts for a specific person.

    Final query might look like:

    SELECT sr.LastName, sr.FirstName, dd.Name, 
        Email = (
            select sc.ContactValue FROM StaffContactInformation as sc
            INNER JOIN StaffRoster as roster on sc.StaffID = roster.ID
            where sc.ContactTypeID = 3 and roster.ID = sr.ID
        ),
        Phone = (
            SELECT sc1.ContactValue FROM StaffContactInformation as sc1 
            INNER JOIN StaffRoster as roster on sc1.StaffID = roster.ID
            where sc1.ContactTypeID = 1 and roster.ID = sr.ID
        ) 
    FROM StaffRoster as sr
    left join dictDivisions as dd on sr.DivisionID = dd.Id  
    where (sr.Active = 1 and sr.isContractor = 0 )
    ORDER BY sr.LastName, sr.FirstName
    
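    The key point is that each scalar subquery must be correlated to the outer row, otherwise it can return values for the wrong (or for every) person. A simplified sketch in Python/SQLite with hypothetical staff/contact tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE staff (id INTEGER, name TEXT);
CREATE TABLE contact (staff_id INTEGER, type_id INTEGER, value TEXT);
INSERT INTO staff VALUES (1, 'Ann'), (2, 'Bob');
INSERT INTO contact VALUES (1, 3, 'ann@x.com'), (1, 1, '555-1'),
                           (2, 3, 'bob@x.com'), (2, 1, '555-2');
""")
# One row per staff member; each subquery is tied to s.id, so the
# email/phone come from that person's contacts only.
rows = con.execute("""
SELECT s.name,
  (SELECT c.value FROM contact c WHERE c.type_id = 3 AND c.staff_id = s.id) AS email,
  (SELECT c.value FROM contact c WHERE c.type_id = 1 AND c.staff_id = s.id) AS phone
FROM staff s ORDER BY s.id
""").fetchall()
print(rows)  # [('Ann', 'ann@x.com', '555-1'), ('Bob', 'bob@x.com', '555-2')]
```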
    qid & accept id: (19532288, 19532788) query: MySQL Adding Timestamp Values, Adding Resultset, and Grouping by Date soup:

    soup wrap:

    You don't need to GROUP BY start_time, end_time if you have a date column (I suggest you create a date column to group the time diffs).
    Here's my example.
    My table (named time):

    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
        date   |     starttime       |       endtime       |
    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    2013-10-23 | 2013-10-23 08:00:00 | 2013-10-23 16:30:00 |
    2013-10-24 | 2013-10-24 08:30:00 | 2013-10-24 17:00:00 |
    

    This is my query to display the time difference between starttime and endtime:

    SELECT *, TIMEDIFF(endtime,starttime) AS duration FROM time
    

    It will return:

    +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
        date   |     starttime       |       endtime       | duration |
    +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    2013-10-23 | 2013-10-23 08:00:00 | 2013-10-23 16:30:00 | 08:30:00 |
    2013-10-24 | 2013-10-24 08:30:00 | 2013-10-24 17:00:00 | 08:30:00 |
    

    That's if you have a date column separate from starttime and endtime.
    You didn't give the structure of your table, so I can't see your problem clearly.

    UPDATE:
    I imagine that you have a table like this: table data
    Maybe your problem is to calculate the time between the starting time and ending time of a user's day, where the user could start and stop at any time that day.
    I run this query to do that:

    SELECT *, TIMEDIFF(MAX(end),MIN(start)) AS duration FROM time
    GROUP BY user_id, date 
    ORDER BY date ASC;
    

    It will return this:
    result 1
    Or if you run this query:

    SELECT 
    user_id,
    MIN(start) AS start, 
    MAX(end) AS end, 
    TIMEDIFF(MAX(end),MIN(start)) AS duration 
    FROM time
    GROUP BY user_id, date 
    ORDER BY date ASC
    

    It will return this:
    result 2
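    TIMEDIFF is MySQL-specific, but the MIN(start)/MAX(end) grouping idea can be sketched in SQLite, using epoch seconds via strftime('%s', ...) instead of TIMEDIFF; the table and data are made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tl (user_id INTEGER, d TEXT, starttime TEXT, endtime TEXT)")
con.executemany("INSERT INTO tl VALUES (?, ?, ?, ?)", [
    (1, '2013-10-23', '2013-10-23 08:00:00', '2013-10-23 12:00:00'),
    (1, '2013-10-23', '2013-10-23 13:00:00', '2013-10-23 16:30:00'),
])
# Per user and per day: earliest start to latest end, in hours.
rows = con.execute("""
SELECT user_id, d,
       (strftime('%s', MAX(endtime)) - strftime('%s', MIN(starttime))) / 3600.0 AS hours
FROM tl GROUP BY user_id, d
""").fetchall()
print(rows)  # [(1, '2013-10-23', 8.5)]
```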

    qid & accept id: (19532801, 19533387) query: Inserting a TIME value soup:

    soup wrap:

    As an alternative, the start_time field could store "14:00:00" directly.

    e.g.

    UPDATE TABLE SET start_time= STR_TO_DATE('14:00:00', '%k:%i:%s');
    

    When you retrieve the data, the SQL may look like below:

    SELECT TIME_FORMAT(start_time, '%r') FROM TABLE
    

    However, it is still a little different from your expectation; the result will be 2:00:00 PM.
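    MySQL's TIME_FORMAT(start_time, '%r') yields the 12-hour hh:mm:ss AM/PM form. The same formatting in Python, for comparison (%p is locale-dependent):

```python
from datetime import datetime

t = datetime.strptime("14:00:00", "%H:%M:%S")
# %I is the zero-padded 12-hour clock, %p the AM/PM marker.
print(t.strftime("%I:%M:%S %p"))  # e.g. 02:00:00 PM in the C locale
```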

    qid & accept id: (19562212, 19562538) query: SQL - select row with most matching columns soup:

    soup wrap:

    This should do the trick:

    SELECT * FROM (
        SELECT *, CASE application WHEN ? THEN 1 WHEN NULL THEN 0 ELSE NULL END
                + CASE dstIP WHEN ? THEN 1 WHEN NULL THEN 0 ELSE NULL END
                + CASE dstPort WHEN ? THEN 1 WHEN NULL THEN 0 ELSE NULL END AS Matches
        FROM table WHERE Matches IS NOT NULL
    ) GROUP BY application, dstIP, dstPort ORDER BY Matches DESC;
    

    The Matches column will count the column matches, or be NULL on a mismatch.

    GROUP BY without aggregate functions will catch the first row (I hope!), which is the max match because the inner query is sorted descending.

    EDIT: New version:

    SELECT *, CASE WHEN application IS ? THEN 1 WHEN application IS NULL THEN 0 ELSE NULL END
            + CASE WHEN dstIP IS ? THEN 1 WHEN dstIP IS NULL THEN 0 ELSE NULL END
            + CASE WHEN dstPort IS ? THEN 1 WHEN dstPort IS NULL THEN 0 ELSE NULL END AS Matches
    FROM t
    WHERE Matches IS NOT NULL
    ORDER BY Matches DESC
    LIMIT 1;
    

    Advantages: you can compare NULL also. Disadvantages: only one match is shown when equally ranked matches are found.
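    The CASE-based scoring (1 for an exact match, 0 for a NULL wildcard, NULL to disqualify the row) can be tested in SQLite via a derived table, which also avoids referencing the Matches alias directly in WHERE (not all engines allow that); the data here is made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE rules (application TEXT, dstIP TEXT, dstPort INTEGER)")
con.executemany("INSERT INTO rules VALUES (?, ?, ?)", [
    ('http', '1.2.3.4', 80),   # exact match on all three columns -> score 3
    ('http', None, None),      # wildcard IP/port -> score 1
    ('ftp', '9.9.9.9', 22),    # mismatch -> NULL, filtered out
])
rows = con.execute("""
SELECT * FROM (
  SELECT application, dstIP, dstPort,
         CASE WHEN application = ? THEN 1 WHEN application IS NULL THEN 0 END
       + CASE WHEN dstIP = ? THEN 1 WHEN dstIP IS NULL THEN 0 END
       + CASE WHEN dstPort = ? THEN 1 WHEN dstPort IS NULL THEN 0 END AS matches
  FROM rules)
WHERE matches IS NOT NULL
ORDER BY matches DESC LIMIT 1
""", ('http', '1.2.3.4', 80)).fetchall()
print(rows)  # [('http', '1.2.3.4', 80, 3)]
```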

    qid & accept id: (19577349, 19577436) query: SQL select all from one table of joint tables soup:

    soup wrap:

    In the GROUP BY you need to include all the columns that are not aggregated.

    So your query has to become:

          SELECT FLIGHTS.*, 
                 SEATS_MAX-COUNT(BOOKING_ID) 
            FROM FLIGHTS 
      INNER JOIN PLANES 
              ON FLIGHTS.PLANE_ID = PLANES.PLANE_ID 
       LEFT JOIN BOOKINGS 
              ON FLIGHTS.FLIGHT_ID = BOOKINGS.FLIGHT_ID 
        GROUP BY FLIGHTS.Column1,
                 ...
                 FLIGHTS.ColumN,
                 SEATS_MAX;
    

    Edit: To list all columns of your table you can use the following query:

      SELECT 'FLIGHTS.' || column_name
        FROM user_tab_columns
       WHERE table_name = 'FLIGHTS'
    ORDER BY column_id;
    

    This should make your life a bit easier; then just copy and paste.
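    Every catalog has its own way to list a table's columns (user_tab_columns here is Oracle's). For comparison, a sketch of the same column-list generation in Python/SQLite using PRAGMA table_info, with a hypothetical FLIGHTS table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE FLIGHTS (FLIGHT_ID INTEGER, PLANE_ID INTEGER, DEPARTS TEXT)")
# PRAGMA table_info returns (cid, name, type, notnull, dflt_value, pk)
# per column, in declaration order; prefix each name for the GROUP BY list.
cols = [f"FLIGHTS.{row[1]}" for row in con.execute("PRAGMA table_info(FLIGHTS)")]
print(",\n".join(cols))
```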

    qid & accept id: (19582011, 19582546) query: How can I copy one column to from one table to another in SQL Server soup:
    soup wrap:

    How can I abort those active transactions so the task can be successful ?

    You can't, because it's all one UPDATE ... FROM transaction.
    You can either increase the max size of the log file:

    ALTER DATABASE DB_NAME
    MODIFY FILE (NAME=LOG_FILE_NAME,MAXSIZE=UNLIMITED);
    

    Or you can try something like this:

    WHILE EXISTS
    (select *
    from ExceptionRow
         inner join HashFP ON ExceptionRow.Hash=HashFP.FingerPrintMD5
    where ExceptionRow.Message is null
          AND not HashFP.MessageFP is null
    )
    UPDATE TOP (1000) ExceptionRow
    SET Exceptionrow.Message = HashFP.MessageFP
    FROM ExceptionRow 
         INNER JOIN HashFP ON ExceptionRow.Hash=HashFP.FingerPrintMD5
    WHERE ExceptionRow.Message IS NULL
          AND NOT HashFP.MessageFP IS NULL
    

    If the database has the SIMPLE recovery model this should work; if FULL or BULK_LOGGED, you also need to back up the transaction log in every iteration.
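    The batching idea (update a limited chunk per iteration until nothing is left, keeping each transaction's log footprint small) can be sketched engine-neutrally. This Python/SQLite version uses a LIMIT-ed key subquery instead of UPDATE TOP, with made-up tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE er (id INTEGER PRIMARY KEY, hash TEXT, message TEXT)")
con.execute("CREATE TABLE fp (hash TEXT, message TEXT)")
con.executemany("INSERT INTO er (hash) VALUES (?)", [(f"h{i}",) for i in range(10)])
con.executemany("INSERT INTO fp VALUES (?, ?)", [(f"h{i}", f"m{i}") for i in range(10)])

batch = 4
while True:
    # Update at most `batch` unfilled rows per pass; stop when none remain.
    cur = con.execute("""
        UPDATE er SET message = (SELECT message FROM fp WHERE fp.hash = er.hash)
        WHERE er.id IN (
            SELECT id FROM er
            WHERE message IS NULL
              AND EXISTS (SELECT 1 FROM fp WHERE fp.hash = er.hash)
            LIMIT ?
        )
    """, (batch,))
    con.commit()
    if cur.rowcount == 0:
        break

remaining = con.execute("SELECT COUNT(*) FROM er WHERE message IS NULL").fetchone()[0]
print(remaining)  # 0
```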

    qid & accept id: (19582702, 19582887) query: Get data from object in SQL soup:

    soup wrap:

    How the returned result is displayed depends heavily on the client you are using to execute the query. It would be better if you explicitly specified the properties of the object instance you want displayed. For example:

    create or replace type T_Obj as object(
      prop1 number,
      prop2 date
    )  
    
    create or replace function F_1(
       p_var1 in number,
       p_var2 in date
     ) return t_obj is
     begin
       return t_obj(p_var1, p_var2);
     end;
    
    select t.obj.prop1
         , t.obj.prop2
     from (select F_1(1, sysdate) as obj
             from dual) t
    

    result:

     OBJ.PROP1  OBJ.PROP2
    ----------  -----------
             1  25-Oct-2013
    
    qid & accept id: (19645073, 19645805) query: SQL for list of winners that have won at least a specific percentage of times soup:

    soup wrap:

    For one user:

    SELECT ifnull(wins, 0) wins, ifnull(loses,0) loses, 
           ifnull(wins, 0)+ifnull(loses,0) total, 
           ifnull(wins, 0) / ( ifnull(wins, 0)+ifnull(loses,0)) percent
    FROM (
    SELECT
     (SELECT COUNT(*) FROM user_versus WHERE id_user_winner = 6 ) wins,
     (SELECT COUNT(*) FROM user_versus WHERE id_user_loser = 6 ) loses
    ) subqry
    

    For all users:

    SELECT id_user_winner AS id_user, 
           ifnull(wins, 0) wins,
           ifnull(loses,0) loses,
           ifnull(wins, 0)+ifnull(loses,0) total, 
           ifnull(wins, 0) / ( ifnull(wins, 0)+ifnull(loses,0)) percent
    FROM (
       SELECT id_user_winner AS id_user FROM user_versus 
       UNION
       SELECT id_user_loser FROM user_versus 
    ) u
    LEFT JOIN (
      SELECT id_user_winner, count(*) wins
      FROM user_versus 
      GROUP BY id_user_winner
    ) w
    ON u.id_user = id_user_winner
    LEFT JOIN (
      SELECT id_user_loser, count(*) loses
      FROM user_versus 
      GROUP BY id_user_loser
    ) l
    ON u.id_user = l.id_user_loser
    
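    A runnable version of the all-users pattern (derived win and loss counts LEFT JOINed back to the union of user ids) in Python/SQLite, with made-up data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user_versus (id_user_winner INTEGER, id_user_loser INTEGER)")
con.executemany("INSERT INTO user_versus VALUES (?, ?)",
                [(6, 7), (6, 8), (7, 6)])
# UNION collects every user who appears on either side; the two derived
# tables hold per-user win and loss counts; IFNULL covers users with none.
rows = con.execute("""
SELECT u.id_user,
       IFNULL(w.wins, 0) AS wins,
       IFNULL(l.loses, 0) AS loses,
       IFNULL(w.wins, 0) * 1.0 / (IFNULL(w.wins, 0) + IFNULL(l.loses, 0)) AS pct
FROM (SELECT id_user_winner AS id_user FROM user_versus
      UNION
      SELECT id_user_loser FROM user_versus) u
LEFT JOIN (SELECT id_user_winner, COUNT(*) AS wins
           FROM user_versus GROUP BY id_user_winner) w
       ON u.id_user = w.id_user_winner
LEFT JOIN (SELECT id_user_loser, COUNT(*) AS loses
           FROM user_versus GROUP BY id_user_loser) l
       ON u.id_user = l.id_user_loser
ORDER BY u.id_user
""").fetchall()
print(rows)
```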
    qid & accept id: (19663813, 19663954) query: MySQL: Counting Latest Occurrences of Field in Another Table soup:
    soup wrap:
    select status_id, count(1) cnt
    from statushistory h
    where not exists 
     (select 1 from statushistory h1 
      where h1.project_id=h.project_id and h1.date_added>h.date_added)
    group by status_id
    

    Here it is to test in SQLfiddle

    This is its version, checking projects table:

    select status_id, count(1) cnt
    from statushistory h, projects p
    where p.project_id=h.project_id and p.active=1
     and not exists 
     (select 1 from statushistory h1 
      where h1.project_id=h.project_id and h1.date_added>h.date_added)
    group by status_id
    

    See it in fiddle here

    Of course, to run this effectively you definitely need an index on (project_id, date_added), and maybe on status_id too (see if its presence changes the query execution plan).

    I am not sure if low performance caused by a subquery in the where-clause is a myth or not, but here is a version without it (based partly on Mosty Mostacho's code). You are welcome to compare these queries and tell us which performs better.

    select h.status_id, count(*) cnt FROM (
     select project_id, max(date_added) maxdate 
     from statushistory
     group by project_id
    ) h1, statushistory h, projects p
    where h.project_id=h1.project_id and h.date_added=h1.maxdate
     and p.project_id=h.project_id and p.active=1
    group by h.status_id
    

    See it in fiddle here
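    The NOT EXISTS latest-row-per-project pattern from the first query can be checked quickly in SQLite via Python (hypothetical statushistory data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE statushistory (project_id INTEGER, status_id INTEGER, date_added TEXT)")
con.executemany("INSERT INTO statushistory VALUES (?, ?, ?)", [
    (1, 10, '2013-01-01'), (1, 20, '2013-02-01'),
    (2, 20, '2013-01-15'),
])
# A row survives only if no later row exists for the same project,
# i.e. it carries the project's latest status; then count per status.
rows = con.execute("""
SELECT status_id, COUNT(*) AS cnt
FROM statushistory h
WHERE NOT EXISTS (SELECT 1 FROM statushistory h1
                  WHERE h1.project_id = h.project_id
                    AND h1.date_added > h.date_added)
GROUP BY status_id
""").fetchall()
print(rows)  # [(20, 2)]
```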

    qid & accept id: (19680651, 19681047) query: FULL OUTER JOIN with temp tables soup:

    soup wrap:

    You can still use a FULL JOIN, just use ISNULL on the second join condition:

    SELECT  RowNumber = COALESCE(t.RowNumber, e.RowNumber, d.RowNumber),
            EmployeeID = COALESCE(t.EmployeeID, e.EmployeeID, d.EmployeeID),
            t.FirstName,
            t.MiddleName,
            t.LastName,
            t.SSN,
            t.EmployeeCode,
            t.TaxName,
            t.Amount,
            t.GrossPay,
            t.CompanyId,
            e.EarningDescription,
            EarningAmount = e.Amount,
            d.DeductionDescription,
            DeductionAmount = d.Amount
    FROM    @Tax t
            FULL JOIN @Earnings e
                ON t.EmployeeID = e.EmployeeID
                AND t.RowNumber = e.RowNumber
            FULL JOIN @Deductions D
                ON d.EmployeeID = ISNULL(t.EmployeeID, e.EmployeeID)
                AND d.RowNumber = ISNULL(t.RowNumber, e.RowNumber);
    

    Working example below (all columns other than those needed for the joins are null, though):


    DECLARE @Tax Table 
    (
       RowNumber int , 
       FirstName nvarchar(50),
       MiddleName  nvarchar(50),
       LastName nvarchar(50),
       SSN nvarchar(50),
       EmployeeCode nvarchar(50),
       TaxName nvarchar(50),
       Amount decimal(18,2),   
       GrossPay decimal(18,2),
       CompanyId int,
       EmployeeId int
    )
    INSERT @Tax  (RowNumber, EmployeeID)
    VALUES (1, 1), (2, 1), (3, 1), (4, 1);
    
    DECLARE @Earnings TABLE
    (
       RowNumber int , 
       EmployeeId int,  
       EarningDescription nvarchar(50),  
       Amount decimal(18,2)
    )
    INSERT @Earnings  (RowNumber, EmployeeID)
    VALUES (1, 1), (2, 1);
    
    DECLARE @Deductions TABLE 
    (
        RowNumber int , 
        EmployeeId int,  
        DeductionDescription nvarchar(50),  
        Amount decimal(18,2)
    ) 
    INSERT @Deductions  (RowNumber, EmployeeID)
    VALUES (1, 1), (2, 1), (3, 1), (4, 1), (5, 1), (6, 1);  
    
    
    SELECT  RowNumber = COALESCE(t.RowNumber, e.RowNumber, d.RowNumber),
            EmployeeID = COALESCE(t.EmployeeID, e.EmployeeID, d.EmployeeID),
            t.FirstName,
            t.MiddleName,
            t.LastName,
            t.SSN,
            t.EmployeeCode,
            t.TaxName,
            t.Amount,
            t.GrossPay,
            t.CompanyId,
            e.EarningDescription,
            EarningAmount = e.Amount,
            d.DeductionDescription,
            DeductionAmount = d.Amount
    FROM    @Tax t
            FULL JOIN @Earnings e
                ON t.EmployeeID = e.EmployeeID
                AND t.RowNumber = e.RowNumber
            FULL JOIN @Deductions D
                ON d.EmployeeID = ISNULL(t.EmployeeID, e.EmployeeID)
                AND d.RowNumber = ISNULL(t.RowNumber, e.RowNumber);
    
    qid & accept id: (19690325, 19690486) query: SQL Query to get recursive count of employees under each manager soup:

    First off, an important note: The first row of the Emp_Table, where Emp_id==Manager_Id==1 is not only meaningless but will also cause infinite recursion. I suggest you remove it.

    \n

    In order to provide an answer, however, I first created a view that eliminates such invalid entries, and used that instead of Emp_Table:

    \n
    create view valid_mng as \nselect Emp_Id,Manager_id from Emp_Table\nwhere Emp_Id<>Manager_Id\n
    \n

    So it boils down to the following, with a little help of a recursive CTE:

    \n
    With cte as (\n  select Emp_Id,Manager_id from valid_mng\n  union all\n  select c.Emp_Id,e.Manager_Id \n  from cte c join valid_mng e on (c.Manager_Id=e.Emp_Id)\n  )\n\nselect m.Manager_Id,count(e.Emp_Id) as Count_of_Employees\nfrom [Execute] m\nleft join cte e on (e.Manager_Id=m.Manager_Id)\ngroup by m.Manager_Id\n
    \n

    If you eventually remove the offending row(s), or better yet set Manager_Id=NULL as HABO suggested, just ignore the valid_mng view and replace it with Emp_Table everywhere.

    \n

    Also a side note: Execute is a reserved word in MSSQL. Avoiding the use of reserved words in user object naming is always a good practice.

    \n soup wrap:

    First off, an important note: The first row of the Emp_Table, where Emp_id==Manager_Id==1 is not only meaningless but will also cause infinite recursion. I suggest you remove it.

    In order to provide an answer, however, I first created a view that eliminates such invalid entries, and used that instead of Emp_Table:

    create view valid_mng as 
    select Emp_Id,Manager_id from Emp_Table
    where Emp_Id<>Manager_Id
    

    So it boils down to the following, with a little help of a recursive CTE:

    With cte as (
      select Emp_Id,Manager_id from valid_mng
      union all
      select c.Emp_Id,e.Manager_Id 
      from cte c join valid_mng e on (c.Manager_Id=e.Emp_Id)
      )
    
    select m.Manager_Id,count(e.Emp_Id) as Count_of_Employees
    from [Execute] m
    left join cte e on (e.Manager_Id=m.Manager_Id)
    group by m.Manager_Id
    

    If you eventually remove the offending row(s), or better yet set Manager_Id=NULL as HABO suggested, just ignore the valid_mng view and replace it with Emp_Table everywhere.

    Also a side note: Execute is a reserved word in MSSQL. Avoiding the use of reserved words in user object naming is always a good practice.

    qid & accept id: (19707228, 19707902) query: XML/SQL - Adding a string at the end of each line in individual fields soup:

    In this case, there is no real distinction between one newline and two newlines.

    \n

    Does this do the job?

    \n
    select replace(details, E'\n', ''||E'\n') from personal_details\n
    \n

    EDIT:\nAfter reading your latest edit with extra care to the desired result,\nI also suggest a double replace:

    \n
    select replace(\n  replace(details, E'\n\n', ''||E'\n'),\nE'\n', ''||E'\n')\nfrom personal_details\n
    \n

    The inner replace which runs first, replaces all double newline chars with your desired extra string just once, plus one newline,

    \n

    while the outer replace further adds the desired string in all newlines encountered.

    \n

    If you want single line output in your file, you can just remove the last ||E'\n' of the outer replace

    \n soup wrap:

    In this case, there is no real distinction between one newline and two newlines.

    Does this do the job?

    select replace(details, E'\n', ''||E'\n') from personal_details
    

    EDIT: After re-reading your latest edit and paying extra attention to the desired result, I also suggest a double replace:

    select replace(
      replace(details, E'\n\n', ''||E'\n'),
    E'\n', ''||E'\n')
    from personal_details
    

    The inner replace, which runs first, replaces each double newline with your desired extra string just once, plus one newline,

    while the outer replace then adds the desired string at every remaining newline.

    If you want single-line output in your file, you can just remove the last ||E'\n' of the outer replace.

    qid & accept id: (19716510, 19716638) query: Add Select and Write Privileges to User for Specific Table Names soup:

    If you want to grant the privileges directly to the user

    \n
    GRANT select, update, insert \n   ON table_owner.feed_data_a\n   TO user_a;\nGRANT select, update, insert \n   ON table_owner.feed_data_b\n   TO user_a;\n
    \n

    More commonly, though, you would create a role, grant the role to the user, and grant the privileges to the role. That makes it easier in the future when there is a new user created that you want to have the same privileges as USER_A to just grant a couple of roles rather than figuring out all the privileges that potentially need to be granted. It also makes it easier as new tables are created and new privileges are granted to ensure that users that should have the same privileges continue to have the same privileges.

    \n
    CREATE ROLE feed_data_role;\n\nGRANT select, update, insert \n   ON table_owner.feed_data_a\n   TO feed_data_role;\nGRANT select, update, insert \n   ON table_owner.feed_data_b\n   TO feed_data_role;\n\nGRANT feed_data_role\n   TO user_a\n
    \n soup wrap:

    If you want to grant the privileges directly to the user:

    GRANT select, update, insert 
       ON table_owner.feed_data_a
       TO user_a;
    GRANT select, update, insert 
       ON table_owner.feed_data_b
       TO user_a;
    

    More commonly, though, you would create a role, grant the role to the user, and grant the privileges to the role. That makes it easier in the future when there is a new user created that you want to have the same privileges as USER_A to just grant a couple of roles rather than figuring out all the privileges that potentially need to be granted. It also makes it easier as new tables are created and new privileges are granted to ensure that users that should have the same privileges continue to have the same privileges.

    CREATE ROLE feed_data_role;
    
    GRANT select, update, insert 
       ON table_owner.feed_data_a
       TO feed_data_role;
    GRANT select, update, insert 
       ON table_owner.feed_data_b
       TO feed_data_role;
    
    GRANT feed_data_role
       TO user_a;
    
    qid & accept id: (19718193, 19718374) query: SQL query to return rows from one table that don't exist in another soup:

    Personally, I'd use a MINUS

    \n
    SELECT *\n  FROM code_mapping\n WHERE soure_system_id = '&LHDNUMBER'\nMINUS\nSELECT *\n  FROM dm.code_mapping@prod_check\n
    \n

    MINUS handles NULL comparisons automatically (a NULL on the source automatically matches a NULL on the target).

    \n

    If you want to list all differences between the two tables (i.e. list all rows that exist in dev but not prod and prod but not dev), you can add a UNION ALL

    \n
    (SELECT a.*, 'In dev but not prod' descriptio\n   FROM dev_table a\n MINUS \n SELECT a.*, 'In dev but not prod' description\n   FROM prod_table a)\nUNION ALL\n(SELECT a.*, 'In prod but not dev' descriptio\n   FROM prod_table a\n MINUS \n SELECT a.*, 'In prod but not dev' description\n   FROM dev_table a)\n
    \n soup wrap:

    Personally, I'd use a MINUS

    SELECT *
      FROM code_mapping
     WHERE soure_system_id = '&LHDNUMBER'
    MINUS
    SELECT *
      FROM dm.code_mapping@prod_check
    

    MINUS handles NULL comparisons automatically (a NULL on the source automatically matches a NULL on the target).
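    To see why this matters, compare with an equality-based anti-join: a row whose key column is NULL in both tables is reported as a difference by a NOT EXISTS style check, while MINUS matches it. A sketch with hypothetical one-column tables:

```sql
-- dev_t and prod_t are hypothetical; each contains a single NULL row
SELECT col FROM dev_t
MINUS
SELECT col FROM prod_t;
-- returns no rows: MINUS treats the NULLs as matching

SELECT col FROM dev_t d
WHERE NOT EXISTS (SELECT 1 FROM prod_t p WHERE p.col = d.col);
-- returns the NULL row: the equality never evaluates to true
```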

    If you want to list all differences between the two tables (i.e. list all rows that exist in dev but not prod and prod but not dev), you can add a UNION ALL

    (SELECT a.*, 'In dev but not prod' description
       FROM dev_table a
     MINUS 
     SELECT a.*, 'In dev but not prod' description
       FROM prod_table a)
    UNION ALL
    (SELECT a.*, 'In prod but not dev' description
       FROM prod_table a
     MINUS 
     SELECT a.*, 'In prod but not dev' description
       FROM dev_table a)
    
    qid & accept id: (19748723, 19748863) query: how I can use mysql date greater in order by case? soup:

    Are you looking for something like this?

    \n
    SELECT *\n  FROM users\n ORDER BY (COALESCE(subs_end_datetime, 0) <= CURDATE()), id\n
    \n

    Here is SQLFiddle demo

    \n
    \n

    Based on your comments

    \n
    SELECT *, subs_end_datetime <= CURDATE() aa\n  FROM users\n ORDER BY (COALESCE(subs_end_datetime, 0) <= CURDATE()), subs_end_datetime DESC\n
    \n

    Here is SQLFiddle demo

    \n soup wrap:

    Are you looking for something like this?

    SELECT *
      FROM users
     ORDER BY (COALESCE(subs_end_datetime, 0) <= CURDATE()), id
    

    Here is SQLFiddle demo


    Based on your comments

    SELECT *, subs_end_datetime <= CURDATE() aa
      FROM users
     ORDER BY (COALESCE(subs_end_datetime, 0) <= CURDATE()), subs_end_datetime DESC
    

    Here is SQLFiddle demo

    qid & accept id: (19752084, 19752163) query: SQL case statement in join condition soup:

    You could try something like :

    \n
    AND (table1.counter IS NULL OR table1.counter=table2.counter)\n
    \n

    Instead of :

    \n
    AND table1.counter=table2.counter\n
    \n

    In your first query.

    \n soup wrap:

    You could try something like:

    AND (table1.counter IS NULL OR table1.counter=table2.counter)
    

    Instead of:

    AND table1.counter=table2.counter
    

    In your first query.

    qid & accept id: (19765962, 19767746) query: Calculating days to excluding weekends (Monday to Friday) in SQL Server soup:

    I would always recommend a Calendar table, then you can simply use:

    \n
    SELECT  COUNT(*)\nFROM    dbo.CalendarTable\nWHERE   IsWorkingDay = 1\nAND     [Date] > @StartDate\nAND     [Date] <= @EndDate;\n
    \n

    Since SQL has no knowledge of national holidays for example the number of weekdays between two dates does not always represent the number of working days. This is why a calendar table is a must for most databases. They do not take a lot of memory and simplify a lot of queries.

    \n

    But if this is not an option then you can generate a table of dates relatively easily on the fly and use this

    \n
    SET DATEFIRST 1;\nDECLARE @StartDate DATETIME = '20131103', \n        @EndDate DATETIME = '20131104';\n\n-- GENERATE A LIST OF ALL DATES BETWEEN THE START DATE AND THE END DATE\nWITH AllDates AS\n(   SELECT  TOP (DATEDIFF(DAY, @StartDate, @EndDate))\n            D = DATEADD(DAY, ROW_NUMBER() OVER(ORDER BY a.Object_ID), @StartDate)\n    FROM    sys.all_objects a\n            CROSS JOIN sys.all_objects b\n)\nSELECT  WeekDays = COUNT(*)\nFROM    AllDates\nWHERE   DATEPART(WEEKDAY, D) NOT IN (6, 7);\n
    \n
    \n

    EDIT

    \n

    If you need to calculate the difference between two date columns you can still use your calendar table as so:

    \n
    SELECT  t.ID,\n        t.Date1,\n        t.Date2,\n        WorkingDays = COUNT(c.DateKey)\nFROM    TestTable t\n        LEFT JOIN dbo.Calendar c\n            ON c.DateKey >= t.Date1\n            AND c.DateKey < t.Date2\n            AND c.IsWorkingDay = 1\nGROUP BY t.ID, t.Date1, t.Date2;\n
    \n

    Example on SQL-Fiddle

    \n soup wrap:

    I would always recommend a Calendar table, then you can simply use:

    SELECT  COUNT(*)
    FROM    dbo.CalendarTable
    WHERE   IsWorkingDay = 1
    AND     [Date] > @StartDate
    AND     [Date] <= @EndDate;
    

    Since SQL has no knowledge of national holidays, for example, the number of weekdays between two dates does not always equal the number of working days. This is why a calendar table is a must for most databases. Calendar tables do not take much space and simplify a lot of queries.
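    A minimal calendar table can be built once and reused; a sketch (the table and column names match the query above, while the date range and population logic are assumptions — it reuses the same sys.all_objects cross-join trick shown further down):

```sql
-- One row per date; IsWorkingDay = 1 for Monday..Friday
-- (extend with an UPDATE to flag national holidays).
CREATE TABLE dbo.CalendarTable
(
    [Date]       DATE NOT NULL PRIMARY KEY,
    IsWorkingDay BIT  NOT NULL
);

SET DATEFIRST 1;
INSERT dbo.CalendarTable ([Date], IsWorkingDay)
SELECT  D,
        CASE WHEN DATEPART(WEEKDAY, D) IN (6, 7) THEN 0 ELSE 1 END
FROM   (SELECT TOP (3653)   -- roughly ten years of dates
               D = DATEADD(DAY, ROW_NUMBER() OVER(ORDER BY a.object_id) - 1, '20100101')
        FROM   sys.all_objects a
               CROSS JOIN sys.all_objects b) AS AllDates;
```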

    But if this is not an option then you can generate a table of dates relatively easily on the fly and use this

    SET DATEFIRST 1;
    DECLARE @StartDate DATETIME = '20131103', 
            @EndDate DATETIME = '20131104';
    
    -- GENERATE A LIST OF ALL DATES BETWEEN THE START DATE AND THE END DATE
    WITH AllDates AS
    (   SELECT  TOP (DATEDIFF(DAY, @StartDate, @EndDate))
                D = DATEADD(DAY, ROW_NUMBER() OVER(ORDER BY a.Object_ID), @StartDate)
        FROM    sys.all_objects a
                CROSS JOIN sys.all_objects b
    )
    SELECT  WeekDays = COUNT(*)
    FROM    AllDates
    WHERE   DATEPART(WEEKDAY, D) NOT IN (6, 7);
    

    EDIT

    If you need to calculate the difference between two date columns you can still use your calendar table as so:

    SELECT  t.ID,
            t.Date1,
            t.Date2,
            WorkingDays = COUNT(c.DateKey)
    FROM    TestTable t
            LEFT JOIN dbo.Calendar c
                ON c.DateKey >= t.Date1
                AND c.DateKey < t.Date2
                AND c.IsWorkingDay = 1
    GROUP BY t.ID, t.Date1, t.Date2;
    

    Example on SQL-Fiddle

    qid & accept id: (19835090, 21036491) query: Replace multiple characters from string without using any nested replace functions soup:

    I had created a SPLIT function to implement this because I need to implement this operation multiple time in PROCEDURE

    \n

    SPLIT FUNCTION

    \n
    create function [dbo].[Split](@String varchar(8000), @Delimiter char(1))       \nreturns @temptable TABLE (items varchar(8000))       \nas       \nbegin       \n    declare @idx int       \n    declare @slice varchar(8000)       \n\n    select @idx = 1       \n        if len(@String)<1 or @String is null  return       \n\n    while @idx!= 0       \n    begin       \n        set @idx = charindex(@Delimiter,@String)       \n        if @idx!=0       \n            set @slice = left(@String,@idx - 1)       \n        else       \n            set @slice = @String       \n\n        if(len(@slice)>0)  \n            insert into @temptable(Items) values(@slice)       \n\n        set @String = right(@String,len(@String) - @idx)       \n        if len(@String) = 0 break       \n    end   \nreturn       \nend\n
    \n

    Code used in procedure:

    \n
    DECLARE @NEWSTRING VARCHAR(100) \nSET @NEWSTRING = '(N_100-(6858)*(6858)*N_100/0_2)%N_35' ;\nSELECT @NEWSTRING = REPLACE(@NEWSTRING, items, '~') FROM dbo.Split('+,-,*,/,%,(,)', ',');\nPRINT @NEWSTRING\n
    \n

    OUTPUT

    \n
    ~N_100~~6858~~~6858~~N_100~0_2~~N_35\n
    \n soup wrap:

    I created a SPLIT function to implement this because I needed to perform the operation multiple times in a procedure.

    SPLIT FUNCTION

    create function [dbo].[Split](@String varchar(8000), @Delimiter char(1))
    returns @temptable TABLE (items varchar(8000))
    as
    begin
        declare @idx int
        declare @slice varchar(8000)

        select @idx = 1
        if len(@String) < 1 or @String is null return

        while @idx != 0
        begin
            -- locate the next delimiter; 0 means this is the last token
            set @idx = charindex(@Delimiter, @String)
            if @idx != 0
                set @slice = left(@String, @idx - 1)
            else
                set @slice = @String

            if len(@slice) > 0
                insert into @temptable (items) values (@slice)

            -- chop off the token (and its delimiter) that was just processed
            set @String = right(@String, len(@String) - @idx)
            if len(@String) = 0 break
        end
    return
    end
    

    Code used in procedure:

    DECLARE @NEWSTRING VARCHAR(100) 
    SET @NEWSTRING = '(N_100-(6858)*(6858)*N_100/0_2)%N_35' ;
    SELECT @NEWSTRING = REPLACE(@NEWSTRING, items, '~') FROM dbo.Split('+,-,*,/,%,(,)', ',');
    PRINT @NEWSTRING
    

    OUTPUT

    ~N_100~~6858~~~6858~~N_100~0_2~~N_35
    
    qid & accept id: (19837655, 19837754) query: SQL Server query dry run soup:

    Use an SQL transaction to make your changes then back them out.

    \n

    Before you execute your script:

    \n
    BEGIN TRANSACTION;\n
    \n

    After you execute your script and have done your checking:

    \n
    ROLLBACK TRANSACTION;\n
    \n

    Every change in your script will then be undone.

    \n

    Note: Make sure you don't have a COMMIT in your script!

    \n soup wrap:

    Use a SQL transaction: make your changes, then back them out.

    Before you execute your script:

    BEGIN TRANSACTION;
    

    After you execute your script and have done your checking:

    ROLLBACK TRANSACTION;
    

    Every change in your script will then be undone.

    Note: Make sure you don't have a COMMIT in your script!
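    Putting it together, a dry run looks like this (the UPDATE and the check query are hypothetical placeholders for your own script):

```sql
BEGIN TRANSACTION;

-- your script goes here (hypothetical example)
UPDATE Employees
SET    Salary = Salary * 1.10
WHERE  Department = 'Sales';

-- inspect what the script did before undoing it
SELECT Department, AVG(Salary) AS AvgSalary
FROM   Employees
GROUP BY Department;

ROLLBACK TRANSACTION;   -- every change above is undone
```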

    qid & accept id: (19866409, 19866600) query: How to add one nanosecond to a timestamp in PL/SQL soup:

    interval day to second literal can be used to add fractional seconds to a timestamp value:

    \n

    In this example we add one nanosecond:

    \n
    select timestamp '2013-11-11 22:10:10.111111111' + \n       interval '0 00:00:00.000000001' day to second(9) as res\n  from dual\n
    \n

    Result:

    \n
    RES                           \n-------------------------------\n11-NOV-13 10.10.10.111111112 PM \n
    \n

    Note: When you are using to_timestamp() function to convert character literal to a value of timestamp data type, it's a good idea to specify a format mask(not relay on NLS settings).

    \n
    select TO_TIMESTAMP('11-11-2013 22:10:10:111111111', 'dd-mm-yyyy hh24:mi:ss:ff9') + \n       interval '0 00:00:00.000000001' day to second(9) as res\n  from dual\n
    \n

    Result:

    \n
    RES                           \n-------------------------------\n11-NOV-13 10.10.10.111111112 PM \n
    \n

    Note: As you intend to process values of timestamp data type using PL/SQL you should be aware of the following. The default precision of fractional seconds for values of timestamp data type, in PL/SQL, is 6 not 9 as it is in SQL, so you may expect truncation of fractional second. In order to avoid truncation of fractional seconds use timestamp_unconstrained and dsinterval_unconstrained data types instead of timestamp and interval day to second:

    \n
    declare\n  l_tmstmp timestamp_unconstrained := to_timestamp('11-11-2013 22:10:10:111111111',\n                                                   'dd-mm-yyyy hh24:mi:ss:ff9');\n  l_ns     dsinterval_unconstrained :=  interval '0.000000001' second;\nbegin\n  l_tmstmp := l_tmstmp + l_ns;\n  dbms_output.put_line(to_char(l_tmstmp, 'dd-mm-yyyy hh24:mi:ss:ff9'));\nend;\n
    \n

    Result:

    \n
    anonymous block completed\n11-11-2013 22:10:10:111111112\n
    \n soup wrap:

    interval day to second literal can be used to add fractional seconds to a timestamp value:

    In this example we add one nanosecond:

    select timestamp '2013-11-11 22:10:10.111111111' + 
           interval '0 00:00:00.000000001' day to second(9) as res
      from dual
    

    Result:

    RES                           
    -------------------------------
    11-NOV-13 10.10.10.111111112 PM 
    

    Note: When you are using the to_timestamp() function to convert a character literal to a value of timestamp data type, it's a good idea to specify a format mask (not rely on NLS settings).

    select TO_TIMESTAMP('11-11-2013 22:10:10:111111111', 'dd-mm-yyyy hh24:mi:ss:ff9') + 
           interval '0 00:00:00.000000001' day to second(9) as res
      from dual
    

    Result:

    RES                           
    -------------------------------
    11-NOV-13 10.10.10.111111112 PM 
    

    Note: As you intend to process values of timestamp data type using PL/SQL, you should be aware of the following. The default precision of fractional seconds for values of timestamp data type in PL/SQL is 6, not 9 as it is in SQL, so you may see truncation of fractional seconds. To avoid this truncation, use the timestamp_unconstrained and dsinterval_unconstrained data types instead of timestamp and interval day to second:

    declare
      l_tmstmp timestamp_unconstrained := to_timestamp('11-11-2013 22:10:10:111111111',
                                                       'dd-mm-yyyy hh24:mi:ss:ff9');
      l_ns     dsinterval_unconstrained :=  interval '0.000000001' second;
    begin
      l_tmstmp := l_tmstmp + l_ns;
      dbms_output.put_line(to_char(l_tmstmp, 'dd-mm-yyyy hh24:mi:ss:ff9'));
    end;
    

    Result:

    anonymous block completed
    11-11-2013 22:10:10:111111112
    
    qid & accept id: (19872492, 19882626) query: Detect if MySQL has duplicates when inserting soup:

    In order to be able to change a value of value1 with ON DUPLICATE KEY clause you have to have either a UNIQUE constraint or a PRIMARY KEY on (value2, value3).

    \n
    ALTER TABLE table1 ADD UNIQUE (value2, value3);\n
    \n

    Now to simplify your insert statement you can also use VALUES() in ON DUPLICATE KEY like this

    \n
    INSERT INTO Table1 (`value1`, `value2`, `value3`)\nVALUES ('$valueForValue1', '$valueForValue2', '$valueForValue3')\nON DUPLICATE KEY UPDATE value1 = VALUES(value1);\n
    \n

    Here is SQLFIddle demo

    \n soup wrap:

    In order to be able to change the value of value1 with the ON DUPLICATE KEY clause, you have to have either a UNIQUE constraint or a PRIMARY KEY on (value2, value3).

    ALTER TABLE table1 ADD UNIQUE (value2, value3);
    

    Now, to simplify your insert statement, you can also use VALUES() in ON DUPLICATE KEY like this:

    INSERT INTO Table1 (`value1`, `value2`, `value3`)
    VALUES ('$valueForValue1', '$valueForValue2', '$valueForValue3')
    ON DUPLICATE KEY UPDATE value1 = VALUES(value1);
    

    Here is SQLFiddle demo
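    If you also need to detect, per statement, whether a duplicate was hit, the affected-rows count MySQL reports can tell you: with ON DUPLICATE KEY UPDATE it is 1 for a fresh insert, 2 when an existing row was updated, and 0 when the duplicate row was left unchanged. A sketch:

```sql
INSERT INTO Table1 (`value1`, `value2`, `value3`)
VALUES ('$valueForValue1', '$valueForValue2', '$valueForValue3')
ON DUPLICATE KEY UPDATE value1 = VALUES(value1);

-- 1 = new row inserted, 2 = existing row updated,
-- 0 = duplicate found but value1 was already identical
SELECT ROW_COUNT();
```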

    qid & accept id: (19902526, 19902958) query: table into a table row soup:

    Stop. Don't create tables for each category. Use a proper schema design from the beginning. It will pay off big time by allowing you normally maintain and query your data.

    \n

    In your case the schema might look like

    \n
    CREATE TABLE categories\n(\n  category_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, \n  category_name VARCHAR(128)\n);\n\nCREATE TABLE items\n(\n  item_id int NOT NULL AUTO_INCREMENT PRIMARY KEY, \n  category_id INT, \n  item_name VARCHAR(128),\n  FOREIGN KEY (category_id) REFERENCES categories (category_id)\n);\n
    \n

    To insert new items and associate them with categories

    \n
    INSERT INTO items (category_id, item_name)\nVALUES (1, 'Hard disk');\nINSERT INTO items (category_id, item_name)\nVALUES (2, 'Java');\n
    \n

    To get items in category Hardware

    \n
    SELECT item_id, item_name\n  FROM items i JOIN categories c\n    ON i.category_id = c.category_id\n WHERE c.category_name = 'Hardware'\n
    \n

    or to easily get a count of items per category

    \n
    SELECT category_name, COUNT(item_id) no_items\n  FROM categories c LEFT JOIN items i\n    ON c.category_id = i.category_id\n GROUP BY c.category_id, c.category_name;\n
    \n

    Here is SQLFiddle demo

    \n

    If an item may belong to different categories then you'll need a many-to-many table categories_items.

    \n soup wrap:

    Stop. Don't create tables for each category. Use a proper schema design from the beginning. It will pay off big time by allowing you to maintain and query your data normally.

    In your case the schema might look like

    CREATE TABLE categories
    (
      category_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, 
      category_name VARCHAR(128)
    );
    
    CREATE TABLE items
    (
      item_id int NOT NULL AUTO_INCREMENT PRIMARY KEY, 
      category_id INT, 
      item_name VARCHAR(128),
      FOREIGN KEY (category_id) REFERENCES categories (category_id)
    );
    

    To insert new items and associate them with categories

    INSERT INTO items (category_id, item_name)
    VALUES (1, 'Hard disk');
    INSERT INTO items (category_id, item_name)
    VALUES (2, 'Java');
    

    To get items in category Hardware

    SELECT item_id, item_name
      FROM items i JOIN categories c
        ON i.category_id = c.category_id
     WHERE c.category_name = 'Hardware'
    

    or to easily get a count of items per category

    SELECT category_name, COUNT(item_id) no_items
      FROM categories c LEFT JOIN items i
        ON c.category_id = i.category_id
     GROUP BY c.category_id, c.category_name;
    

    Here is SQLFiddle demo

    If an item may belong to different categories then you'll need a many-to-many table categories_items.
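    Such a many-to-many table is just a pair of foreign keys; a sketch (column names follow the schema above):

```sql
CREATE TABLE categories_items
(
  category_id INT NOT NULL,
  item_id     INT NOT NULL,
  PRIMARY KEY (category_id, item_id),
  FOREIGN KEY (category_id) REFERENCES categories (category_id),
  FOREIGN KEY (item_id)     REFERENCES items (item_id)
);

-- associate one item with two categories
INSERT INTO categories_items (category_id, item_id) VALUES (1, 1), (2, 1);
```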

    qid & accept id: (19939624, 19940421) query: Format the XML String Generated using Oracle XMLAgg soup:

    You have to use XMLSERIALIZE:

    \n
    SELECT\n  XMLSERIALIZE(DOCUMENT\n    XMLElement("Sample-Test" ,\n        XMLAgg(\n        XMLElement("Sample",\n              XMLElement("SAMPLE_NUM", s.sample_number), \n              XMLElement("LABEL_ID", s.label_id),\n              XMLElement("STATUS", s.status),\n               (SELECT \n                  XMLAgg( \n                  XMLElement("Test-Details",\n                      XMLElement("TEST_NUM", t.test_number),\n                      XMLElement("ANALYSIS", t.analysis),                                                                   \n                          (SELECT \n                           XMLAgg( \n                              XMLElement("Result-Details",\n                              XMLElement("RESULT_NUM", R.RESULT_NUMBER),\n                              XMLElement("RESULT_NAME", R.NAME))) \n                              FROM RESULT R WHERE t.test_number = R.test_number \n                              and t.SAMPLE_number = R.SAMPLE_NUMBER\n                                                          )))                                                                            \n                 FROM TEST T WHERE t.SAMPLE_number = S.SAMPLE_NUMBER))) \n                 ) AS CLOB INDENT SIZE = 2) as XML                                             \n FROM sample s \n WHERE s.sample_number = 720000020018;\n
    \n

    Edit

    \n

    It is not working for you, because, most probably, you are using Oracle 10g, and the INDENT option was introduced in version 11g. If this is the case, try below approach with the EXTRACT('*'):

    \n
    SELECT\n        XMLElement("Sample-Test" ,\n            XMLAgg(\n            XMLElement("Sample",\n                  XMLElement("SAMPLE_NUM", s.sample_number), \n                  XMLElement("LABEL_ID", s.label_id),\n                  XMLElement("STATUS", s.status),\n                   (SELECT \n                      XMLAgg( \n                      XMLElement("Test-Details",\n                          XMLElement("TEST_NUM", t.test_number),\n                          XMLElement("ANALYSIS", t.analysis),                                                                   \n                              (SELECT \n                               XMLAgg( \n                                  XMLElement("Result-Details",\n                                  XMLElement("RESULT_NUM", R.RESULT_NUMBER),\n                                  XMLElement("RESULT_NAME", R.NAME))) \n                                  FROM RESULT R WHERE t.test_number = R.test_number \n                                  and t.SAMPLE_number = R.SAMPLE_NUMBER\n                                                              )))                                                                            \n                     FROM TEST T WHERE t.SAMPLE_number = S.SAMPLE_NUMBER))) \n                     ).EXTRACT('*') as XML                                             \n     FROM sample s \n     WHERE s.sample_number = 720000020018;\n
    \n soup wrap:

    You have to use XMLSERIALIZE:

    SELECT
      XMLSERIALIZE(DOCUMENT
        XMLElement("Sample-Test" ,
            XMLAgg(
            XMLElement("Sample",
                  XMLElement("SAMPLE_NUM", s.sample_number), 
                  XMLElement("LABEL_ID", s.label_id),
                  XMLElement("STATUS", s.status),
                   (SELECT 
                      XMLAgg( 
                      XMLElement("Test-Details",
                          XMLElement("TEST_NUM", t.test_number),
                          XMLElement("ANALYSIS", t.analysis),                                                                   
                              (SELECT 
                               XMLAgg( 
                                  XMLElement("Result-Details",
                                  XMLElement("RESULT_NUM", R.RESULT_NUMBER),
                                  XMLElement("RESULT_NAME", R.NAME))) 
                                  FROM RESULT R WHERE t.test_number = R.test_number 
                                  and t.SAMPLE_number = R.SAMPLE_NUMBER
                                                              )))                                                                            
                     FROM TEST T WHERE t.SAMPLE_number = S.SAMPLE_NUMBER))) 
                     ) AS CLOB INDENT SIZE = 2) as XML                                             
     FROM sample s 
     WHERE s.sample_number = 720000020018;
    

    Edit

    It is not working for you most probably because you are using Oracle 10g, and the INDENT option was only introduced in version 11g. If this is the case, try the approach below with EXTRACT('*'):

    SELECT
            XMLElement("Sample-Test" ,
                XMLAgg(
                XMLElement("Sample",
                      XMLElement("SAMPLE_NUM", s.sample_number), 
                      XMLElement("LABEL_ID", s.label_id),
                      XMLElement("STATUS", s.status),
                       (SELECT 
                          XMLAgg( 
                          XMLElement("Test-Details",
                              XMLElement("TEST_NUM", t.test_number),
                              XMLElement("ANALYSIS", t.analysis),                                                                   
                                  (SELECT 
                                   XMLAgg( 
                                      XMLElement("Result-Details",
                                      XMLElement("RESULT_NUM", R.RESULT_NUMBER),
                                      XMLElement("RESULT_NAME", R.NAME))) 
                                      FROM RESULT R WHERE t.test_number = R.test_number 
                                      and t.SAMPLE_number = R.SAMPLE_NUMBER
                                                                  )))                                                                            
                         FROM TEST T WHERE t.SAMPLE_number = S.SAMPLE_NUMBER))) 
                         ).EXTRACT('*') as XML                                             
         FROM sample s 
         WHERE s.sample_number = 720000020018;
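    Oracle builds the nested Sample → Test → Result document server-side, but the same shape is easy to sanity-check client-side. This is a minimal Python sketch using the standard library's ElementTree; the in-memory rows and their values are made up for illustration:

    ```python
    # Illustrative only: build the same nested Sample -> Test -> Result XML
    # shape in Python, from hypothetical in-memory rows (values invented).
    import xml.etree.ElementTree as ET

    samples = [{"sample_number": 720000020018, "label_id": "L1", "status": "A"}]
    tests = [{"sample_number": 720000020018, "test_number": 1, "analysis": "PH"}]
    results = [{"sample_number": 720000020018, "test_number": 1,
                "result_number": 10, "name": "acidity"}]

    root = ET.Element("Sample-Test")
    for s in samples:
        sample_el = ET.SubElement(root, "Sample")
        ET.SubElement(sample_el, "SAMPLE_NUM").text = str(s["sample_number"])
        ET.SubElement(sample_el, "LABEL_ID").text = s["label_id"]
        ET.SubElement(sample_el, "STATUS").text = s["status"]
        # correlate tests to the sample, like the inner SELECT ... WHERE
        for t in (t for t in tests if t["sample_number"] == s["sample_number"]):
            test_el = ET.SubElement(sample_el, "Test-Details")
            ET.SubElement(test_el, "TEST_NUM").text = str(t["test_number"])
            ET.SubElement(test_el, "ANALYSIS").text = t["analysis"]
            for r in (r for r in results
                      if r["sample_number"] == t["sample_number"]
                      and r["test_number"] == t["test_number"]):
                res_el = ET.SubElement(test_el, "Result-Details")
                ET.SubElement(res_el, "RESULT_NUM").text = str(r["result_number"])
                ET.SubElement(res_el, "RESULT_NAME").text = r["name"]

    xml_text = ET.tostring(root, encoding="unicode")
    ```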
    
    qid & accept id: (19941944, 19942007) query: How to execute two DELETE queries one after another soup:

    soup wrap:

    You can execute queries in succession by separating them with a semicolon ;. More details are in the MySQL documentation.

    Simply do:

    DELETE FROM A WHERE Id IN (SELECT Id FROM B); DELETE FROM B;
    

    Based on your requirement, this does exactly what you asked for, as the example below shows:

    mysql> select sleep(5); show databases;
    +----------+
    | sleep(5) |
    +----------+
    |        0 |
    +----------+
    1 row in set (5.00 sec)
    
    +--------------------+
    | Database           |
    +--------------------+
    |         ...        |
    +--------------------+
    9 rows in set (0.01 sec)
    

    You can do this with the mysql -e command and virtually any MySQL client library (such as the one that ships with PHP).
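    The "two statements separated by a semicolon" idea can be tried outside of MySQL too. A small sketch using Python's sqlite3 (whose executescript() runs a multi-statement string, much like passing a script to mysql -e); table names follow the answer:

    ```python
    # Run two DELETEs one after another as a single multi-statement script.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE A (Id INTEGER);
        CREATE TABLE B (Id INTEGER);
        INSERT INTO A VALUES (1), (2), (3);
        INSERT INTO B VALUES (2), (3);

        -- the two deletes, one after another
        DELETE FROM A WHERE Id IN (SELECT Id FROM B);
        DELETE FROM B;
    """)
    remaining_a = [row[0] for row in conn.execute("SELECT Id FROM A")]
    remaining_b = [row[0] for row in conn.execute("SELECT Id FROM B")]
    ```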

    qid & accept id: (19949250, 19949316) query: oracle date to string soup:

    soup wrap:

    You shouldn't use the FM format model because, as written in the documentation:

    FM - Used in combination with other elements to direct the suppression of leading or trailing blanks

    So using FM will make your final string shorter, if possible.

    You should remove the FM from your format model mask and it will work as you expect:

    select to_char(TRUNC(sysdate), 'mm/dd/yyyy hh12:mi:ss am') from dual;
    

    Output:

    11/13/2013 12:00:00 am

    I've changed my answer after reading Nicholas Krasnov's comment (thanks).

    More about date format models in Oracle Documentation: Format models

    Edit

    Yes, the code I provided would return, for example, 01-01-2013. If you want to have the month and day without leading zeroes, then you should write it like this: fmDD-MM-YYYY fmHH:MI:SS.

    The first fm makes the leading zeroes be truncated. The second fm turns off that feature and you do get leading zeroes for the time part of the date, example:

    SELECT TO_CHAR(
             TO_DATE('01-01-2013 10:00:00', 'DD-MM-YYYY HH12:MI:SS'),
             'fmmm/dd/yyyy fmhh12:mi:ss am')
    FROM dual;
    

    Output:

    1/1/2013 10:00:00 am
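    The padded-versus-unpadded distinction that fm toggles in Oracle has a rough Python analogue. Here only the zero-padded form uses strftime directly, because strftime's no-padding flags are platform-specific; the unpadded variant is built from the datetime's attributes instead:

    ```python
    # Zero-padded formatting (like 'mm/dd/yyyy hh12:mi:ss am') versus an
    # unpadded month/day (like the fm-prefixed Oracle mask).
    from datetime import datetime

    d = datetime(2013, 1, 1, 10, 0, 0)

    padded = d.strftime("%m/%d/%Y %I:%M:%S %p")
    unpadded = "{dt.month}/{dt.day}/{dt.year} ".format(dt=d) + d.strftime("%I:%M:%S %p")
    ```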
    qid & accept id: (19968525, 19968582) query: How to Insert a value from column to another colum? soup:

    soup wrap:

    Okay. With the additional information from your comment, this runs on SQL Server 2012:

    First some first aid for your data model:

    CREATE TABLE [Orders] (
    CustomerId INT,
    ProductId INT,
    Quantity INT,
    OrderDate datetime2 default GetDate(),
    EnteredBy SYSNAME default original_login() 
    )
    GO
    

    Then the transaction code would be:

    BEGIN TRANSACTION
    
    DECLARE @Quantity INT
    DECLARE @CustomerId INT
    DECLARE @ProductId INT
    
    INSERT INTO Orders (customerId,productId,quantity) 
    VALUES (@CustomerId,@ProductId,@Quantity)
    
    UPDATE Customer
    SET quantityOrder = QuantityOrder + @Quantity
    WHERE CustomerId = @CustomerId
    
    UPDATE product
    SET quantity = quantity - @Quantity
    WHERE productId = @ProductId
    
    COMMIT TRANSACTION
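    The point of the BEGIN/COMMIT pair is that the insert and the two updates succeed or fail as a unit. A small sketch of the same three statements on sqlite3, whose connection context manager commits on success and rolls back on an exception (table and column names follow the answer; the starting quantities are invented):

    ```python
    # Insert an order and adjust customer/product counters atomically.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE Orders (CustomerId INT, ProductId INT, Quantity INT);
        CREATE TABLE Customer (CustomerId INT, QuantityOrder INT);
        CREATE TABLE Product (ProductId INT, Quantity INT);
        INSERT INTO Customer VALUES (1, 0);
        INSERT INTO Product VALUES (7, 100);
    """)

    customer_id, product_id, quantity = 1, 7, 5
    with conn:  # one transaction: commits on success, rolls back on error
        conn.execute("INSERT INTO Orders VALUES (?, ?, ?)",
                     (customer_id, product_id, quantity))
        conn.execute("UPDATE Customer SET QuantityOrder = QuantityOrder + ? "
                     "WHERE CustomerId = ?", (quantity, customer_id))
        conn.execute("UPDATE Product SET Quantity = Quantity - ? "
                     "WHERE ProductId = ?", (quantity, product_id))

    qty_ordered = conn.execute("SELECT QuantityOrder FROM Customer").fetchone()[0]
    stock_left = conn.execute("SELECT Quantity FROM Product").fetchone()[0]
    ```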
    
    qid & accept id: (19985833, 19986040) query: Get mysql column values to row soup:

    soup wrap:

    You can't do this without a PIVOT TABLE, which in most cases maps rows to a fixed number of columns.

    This page has a procedure to do it automatically: http://www.artfulsoftware.com/infotree/qrytip.php?id=523

    But MySQL has a function which will give you something to work with. You will not see Passenger1..PassengerN; you will see a result like this:

    1 Steve, Gary, Tom
    2 John, Chris, Thomas
    

    If that is good enough for you, this is your query:

    select passengers.Bookingid, group_concat(bookings.Customer)
      from bookings inner join passengers on ( bookings.Bookingid = passengers.Bookingid )
    group by passengers.Bookingid 
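    SQLite also implements GROUP_CONCAT, so the idea is easy to check end to end. The schema here is a hypothetical simplification (one passengers row per passenger name); the set-based assertions avoid relying on concatenation order, which GROUP_CONCAT does not guarantee:

    ```python
    # GROUP_CONCAT collapses one row per passenger into one row per booking.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE passengers (Bookingid INTEGER, Name TEXT);
        INSERT INTO passengers VALUES
            (1, 'Steve'), (1, 'Gary'), (1, 'Tom'),
            (2, 'John'), (2, 'Chris'), (2, 'Thomas');
    """)
    rows = conn.execute("""
        SELECT Bookingid, GROUP_CONCAT(Name, ', ')
          FROM passengers
         GROUP BY Bookingid
         ORDER BY Bookingid
    """).fetchall()
    ```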
    
    qid & accept id: (19998376, 19998564) query: Getting Values from a table column and inserting to another table soup:

    soup wrap:

    Using a GROUP BY and CASE will do the trick:

    CREATE TABLE extended_values (
      name VARCHAR(20),
      value VARCHAR(20),
      userkey INT
    );
    
    INSERT INTO extended_values VALUES ('cs1', 'tgb', 100);
    INSERT INTO extended_values VALUES ('cs2', 'hhy', 100);
    INSERT INTO extended_values VALUES ('cs3', 'ttr', 100);
    INSERT INTO extended_values VALUES ('cs1', 'hht', 104);
    INSERT INTO extended_values VALUES ('cs2', 'iyu', 104);
    INSERT INTO extended_values VALUES ('cs3', 'uyt', 104);
    INSERT INTO extended_values VALUES ('cs1', 'tjg', 106);
    INSERT INTO extended_values VALUES ('cs2', 'yyt', 106);
    INSERT INTO extended_values VALUES ('cs3', 'try', 106);
    
    COMMIT;
    
    CREATE TABLE user_custom_property (
      userkey INT,
      cs1 VARCHAR(20),
      cs2 VARCHAR(20),
      cs3 VARCHAR(20)
    );
    
    INSERT INTO user_custom_property
      SELECT
          userkey,
          MIN(CASE WHEN name = 'cs1' THEN value END),
          MIN(CASE WHEN name = 'cs2' THEN value END),
          MIN(CASE WHEN name = 'cs3' THEN value END)
        FROM extended_values
      GROUP BY userkey;
    
    SELECT * FROM user_custom_property;
    

    Output:

       USERKEY CS1                  CS2                  CS3                
    ---------- -------------------- -------------------- --------------------
           100 tgb                  hhy                  ttr                  
           104 hht                  iyu                  uyt                  
           106 tjg                  yyt                  try 

    Check at SQLFiddle:

    Edit

    Regarding the question in the comment - you just have to change the values in the CASE:

    INSERT INTO user_custom_property
      SELECT
          userkey,
          MIN(CASE WHEN name = 'ea1' THEN value END),
          MIN(CASE WHEN name = 'ea2' THEN value END),
          MIN(CASE WHEN name = 'ea3' THEN value END)
        FROM extended_values
      GROUP BY userkey;
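    The MIN(CASE ...) pivot is plain standard SQL, so it runs unchanged on other engines too. A self-contained check on sqlite3, using a subset of the sample rows above:

    ```python
    # Pivot name/value rows into one row per userkey with MIN(CASE ...).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE extended_values (name TEXT, value TEXT, userkey INT);
        INSERT INTO extended_values VALUES
            ('cs1','tgb',100), ('cs2','hhy',100), ('cs3','ttr',100),
            ('cs1','hht',104), ('cs2','iyu',104), ('cs3','uyt',104);
    """)
    rows = conn.execute("""
        SELECT userkey,
               MIN(CASE WHEN name = 'cs1' THEN value END) AS cs1,
               MIN(CASE WHEN name = 'cs2' THEN value END) AS cs2,
               MIN(CASE WHEN name = 'cs3' THEN value END) AS cs3
          FROM extended_values
         GROUP BY userkey
         ORDER BY userkey
    """).fetchall()
    ```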
    
    qid & accept id: (20028832, 20031650) query: longest winning streak by query soup:

    soup wrap:

    Here's one way, but I've got a feeling you're not going to like it...

    Consider the following DDL and sample data...

    CREATE TABLE results
    (id     INT NOT NULL AUTO_INCREMENT PRIMARY KEY
    ,homeTeam    INT NOT NULL
    ,awayTeam    INT NOT NULL
    ,homeScore    INT NOT NULL
    ,awayScore INT NOT NULL
    );
    
    INSERT INTO results VALUES
    (1,1,2,3,2),
    (2,3,4,0,1),
    (3,2,1,2,0),
    (4,4,3,1,0),
    (5,3,2,1,2),
    (6,2,3,0,2),
    (7,1,4,4,1),
    (8,4,1,1,2),
    (9,1,3,3,0),
    (10,3,1,1,0),
    (11,4,2,1,0),
    (12,2,4,1,2);
    

    From here, we can obtain an intermediate result as follows...

    SELECT x.*, COUNT(*) rank
      FROM
         ( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results 
           UNION
           SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results
         ) x
      JOIN 
         ( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results 
           UNION
           SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results
         ) y
        ON y.team = x.team
       AND y.id <= x.id
     GROUP
        BY x.id
         , x.team
     ORDER
        BY team, rank;
    
    +----+------+--------+------+
    | id | team | result | rank |
    +----+------+--------+------+
    |  1 |    1 | w      |    1 |
    |  3 |    1 | l      |    2 |
    |  7 |    1 | w      |    3 |
    |  8 |    1 | w      |    4 |
    |  9 |    1 | w      |    5 |
    | 10 |    1 | l      |    6 |
    |  1 |    2 | l      |    1 |
    |  3 |    2 | w      |    2 |
    |  5 |    2 | w      |    3 |
    |  6 |    2 | l      |    4 |
    | 11 |    2 | l      |    5 |
    | 12 |    2 | l      |    6 |
    |  2 |    3 | l      |    1 |
    |  4 |    3 | l      |    2 |
    |  5 |    3 | l      |    3 |
    |  6 |    3 | w      |    4 |
    |  9 |    3 | l      |    5 |
    | 10 |    3 | w      |    6 |
    |  2 |    4 | w      |    1 |
    |  4 |    4 | w      |    2 |
    |  7 |    4 | l      |    3 |
    |  8 |    4 | l      |    4 |
    | 11 |    4 | w      |    5 |
    | 12 |    4 | w      |    6 |
    +----+------+--------+------+
    

    By inspection, we can see that team 1 has the longest winning streak (3 consecutive 'w's). You can set up a couple of @vars to track this or, if you're slightly masochistic (like me) you can do something slower, longer, and more complicated...

    SELECT a.team
         , MIN(c.rank) - a.rank + 1 streak
      FROM (SELECT x.*, COUNT(*) rank
      FROM
         ( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results 
           UNION
           SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results
         ) x
      JOIN 
         ( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results 
           UNION
           SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results
         ) y
        ON y.team = x.team
       AND y.id <= x.id
     GROUP
        BY x.id
         , x.team
         ) a
      LEFT 
      JOIN (SELECT x.*, COUNT(*) rank
      FROM
         ( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results 
           UNION
           SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results
         ) x
      JOIN 
         ( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results 
           UNION
           SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results
         ) y
        ON y.team = x.team
       AND y.id <= x.id
     GROUP
        BY x.id
         , x.team
         ) b 
        ON b.team = a.team
       AND b.rank = a.rank - 1 
       AND b.result = a.result
      LEFT 
      JOIN (SELECT x.*, COUNT(*) rank
      FROM
         ( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results 
           UNION
           SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results
         ) x
      JOIN 
         ( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results 
           UNION
           SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results
         ) y
        ON y.team = x.team
       AND y.id <= x.id
     GROUP
        BY x.id
         , x.team
         ) c 
        ON c.team = a.team
       AND c.rank >= a.rank 
       AND c.result = a.result
      LEFT 
      JOIN (SELECT x.*, COUNT(*) rank
      FROM
         ( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results 
           UNION
           SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results
         ) x
      JOIN 
         ( SELECT id,hometeam team, CASE WHEN homescore > awayscore THEN 'w' ELSE 'l' END result FROM results 
           UNION
           SELECT id,awayteam, CASE WHEN awayscore > homescore THEN 'w' ELSE 'l' END result FROM results
         ) y
        ON y.team = x.team
       AND y.id <= x.id
     GROUP
        BY x.id
         , x.team
         ) d 
        ON d.team = a.team
       AND d.rank = c.rank + 1 
       AND d.result = a.result
     WHERE a.result = 'w'
       AND b.id IS NULL
       AND c.id IS NOT NULL
       AND d.id IS NULL
     GROUP 
        BY a.team
         , a.rank
     ORDER 
        BY streak DESC 
     LIMIT 1; 
    
     +------+--------+
     | team | streak |
     +------+--------+
     |    1 |      3 |
     +------+--------+
    

    Note that this doesn't account for individual match ties (a modest change to the repeated subquery), nor if two teams have longest winning streaks of equal length (requiring a JOIN of everything here back on itself!).
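    The @vars approach mentioned above can also be prototyped in plain Python before committing to the SQL: derive each team's ordered w/l sequence from the results, then take the longest run of 'w' with itertools.groupby. This uses the same sample data as the INSERT above:

    ```python
    # Longest winning streak per team, computed from the match results.
    from itertools import groupby

    results = [  # (id, home, away, home_score, away_score), as inserted above
        (1,1,2,3,2),(2,3,4,0,1),(3,2,1,2,0),(4,4,3,1,0),(5,3,2,1,2),(6,2,3,0,2),
        (7,1,4,4,1),(8,4,1,1,2),(9,1,3,3,0),(10,3,1,1,0),(11,4,2,1,0),(12,2,4,1,2),
    ]

    outcomes = {}  # team -> list of 'w'/'l' in match (id) order
    for _id, home, away, home_score, away_score in results:
        outcomes.setdefault(home, []).append('w' if home_score > away_score else 'l')
        outcomes.setdefault(away, []).append('w' if away_score > home_score else 'l')

    def longest_win_streak(seq):
        # groupby yields maximal runs of equal elements; keep only the 'w' runs
        runs = [len(list(g)) for k, g in groupby(seq) if k == 'w']
        return max(runs, default=0)

    best_team = max(outcomes, key=lambda t: longest_win_streak(outcomes[t]))
    best_streak = longest_win_streak(outcomes[best_team])
    ```

    Like the SQL, this ignores ties between teams with equally long streaks (max just picks one).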

    qid & accept id: (20049984, 20050682) query: Calculate time difference between rows soup:

    soup wrap:

    This is a CTE solution but, as has been indicated, this may not always perform well - because we're having to compute functions against the DateTime column, most indexes will be useless:

    declare @t table (ID int not null,[DateTime] datetime not null,
                      PID int not null,TIU int not null)
    insert into @t(ID,[DateTime],PID,TIU) values
    (1,'2013-11-18 00:15:00',1551,1005  ),
    (2,'2013-11-18 00:16:03',1551,1885  ),
    (3,'2013-11-18 00:16:30',9110,75527 ),
    (4,'2013-11-18 00:22:01',1022,75    ),
    (5,'2013-11-18 00:22:09',1019,1311  ),
    (6,'2013-11-18 00:23:52',1022,89    ),
    (7,'2013-11-18 00:24:19',1300,44433 ),
    (8,'2013-11-18 00:38:57',9445,2010  )
    
    ;With Islands as (
        select ID as MinID,[DateTime],ID as RecID from @t t1
        where not exists
            (select * from @t t2
                where t2.ID < t1.ID and --Or by date, if needed
                        --Use 300 seconds to avoid most transition issues
                DATEDIFF(second,t2.[DateTime],t1.[DateTime]) < 300
            )
        union all
        select i.MinID,t2.[DateTime],t2.ID
        from Islands i
            inner join
            @t t2
                on
                    i.RecID < t2.ID and
                    DATEDIFF(second,i.[DateTime],t2.[DateTime]) < 300
    ), Ends as (
        select MinID,MAX(RecID) as MaxID from Islands group by MinID
    )
    select * from @t t
    where exists(select * from Ends e where e.MinID = t.ID or e.MaxID = t.ID)
    

    This also returns a row for ID 1, since that row has no preceding row within 5 minutes of it - but that should be easy enough to exclude in the final select, if needed.

    I've assumed we can use ID as a proxy for increasing dates - that if for two rows, the ID is higher in the second row, then the DateTime will also be later.


    Islands is a recursive CTE. The top half (the anchor) just selects rows which do not have any preceding row within 5 minutes of themselves. We select the ID twice for those rows and also keep the DateTime around.

    In the recursive portion, we try to find a new row from the table that can be "added on" to an existing Islands row - based on this new row being no more than 5 minutes later than the current end-point of the island.

    Once the recursion is complete, we then exclude the intermediate rows that the CTE produces. E.g. for the "4" island, it generated the following rows:

    4,00:22:01,4
    4,00:22:09,5
    4,00:23:52,6
    4,00:24:19,7
    

    And all that we care about is that final row where we've identified an "island" of time from ID 4 to ID 7 - that's what the second CTE (Ends) is finding for us.
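    The island boundaries the CTE produces can be cross-checked in plain Python: walk the rows in ID order and start a new island whenever the gap from the previous row reaches 300 seconds (the CTE's "< 300 extends" condition, inverted). Using the same eight sample rows:

    ```python
    # Gaps-and-islands by a linear scan: islands of rows < 300s apart.
    from datetime import datetime

    rows = [  # (ID, DateTime) from the sample data above
        (1, '2013-11-18 00:15:00'), (2, '2013-11-18 00:16:03'),
        (3, '2013-11-18 00:16:30'), (4, '2013-11-18 00:22:01'),
        (5, '2013-11-18 00:22:09'), (6, '2013-11-18 00:23:52'),
        (7, '2013-11-18 00:24:19'), (8, '2013-11-18 00:38:57'),
    ]

    islands = []  # list of [first_id, last_id]
    prev_time = None
    for rec_id, ts in rows:
        t = datetime.strptime(ts, '%Y-%m-%d %H:%M:%S')
        if prev_time is None or (t - prev_time).total_seconds() >= 300:
            islands.append([rec_id, rec_id])   # start a new island
        else:
            islands[-1][1] = rec_id            # extend the current one
        prev_time = t
    ```

    This reproduces the 4-to-7 island from the walkthrough, plus the single-row islands at IDs 1 and 8.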

    qid & accept id: (20062208, 20062265) query: Simple SQL Table Insert soup:

    soup wrap:

    If you have an existing table you can do:

    INSERT INTO ExistingTable (Columns,..)
    SELECT Columns,...
    FROM OtherTable
    

    From your SQL:

    insert into newEmpTable (employee_id, first_name, 
      last_name, email, phone_number, hire_date, 
      job_id, salary, commission_pct, manager_id, department_id)
    select e.employee_id, e.first_name, e.last_name, e.email, e.phone_number, e.hire_date, e.job_id,       e.salary, e.commission_pct, e.manager_id, e.department_id
    from employees e
    join departments d
    on e.department_id = d.department_id
    join jobs j
    on e.job_id = j.job_id
    join locations l
    on d.location_id = l.location_id
    where l.city = 'Seattle';
    

    See http://docs.oracle.com/cd/E17952_01/refman-5.1-en/insert-select.html

    If you do not have a table and want to create it,

    create table new_table as 
    select e.employee_id, e.first_name, e.last_name, e.email, e.phone_number, e.hire_date,     e.job_id,       e.salary, e.commission_pct, e.manager_id, e.department_id
    from employees e
    join departments d
    on e.department_id = d.department_id
    join jobs j
    on e.job_id = j.job_id
    join locations l
    on d.location_id = l.location_id
    where l.city = 'Seattle';
    
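Both patterns (INSERT ... SELECT into an existing table, and CREATE TABLE ... AS SELECT when the target doesn't exist yet) can be exercised end-to-end with an in-memory database. A minimal sketch using Python's sqlite3 — the table and column names here are invented for the demo, not taken from the question's schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE employees (employee_id INTEGER, first_name TEXT, city TEXT);
    INSERT INTO employees VALUES (1, 'Ann', 'Seattle'), (2, 'Bob', 'Boston');
    CREATE TABLE newEmpTable (employee_id INTEGER, first_name TEXT);
""")

# INSERT ... SELECT into an existing table, filtered like the answer's query
con.execute("""
    INSERT INTO newEmpTable (employee_id, first_name)
    SELECT employee_id, first_name FROM employees WHERE city = 'Seattle'
""")

# CREATE TABLE ... AS SELECT when the target table does not exist yet
con.execute("""
    CREATE TABLE new_table AS
    SELECT employee_id, first_name FROM employees WHERE city = 'Seattle'
""")

print(con.execute("SELECT * FROM newEmpTable").fetchall())  # [(1, 'Ann')]
print(con.execute("SELECT * FROM new_table").fetchall())    # [(1, 'Ann')]
```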
    qid & accept id: (20072250, 20074369) query: Find sum across varying no of columns soup:


    Intro

    The normal way to resolve this question is to choose the correct structure. If you have 24 fields and you need to loop over them dynamically in SQL, then something went wrong. It is also bad that your table does not have a primary key (or you haven't mentioned one).

    Extremely important note

    Even though the approach I'll describe works, it is still bad practice because it relies on some MySQL-specific features. Use it at your own risk - and, again, reconsider your structure if possible.

    The hack

    Actually, you can do some tricks using MySQL INFORMATION_SCHEMA tables. With this you can create "text" SQL, which can later be used in a prepared statement.

    My table

    It's called test. Here it is:

    +----------+---------+------+-----+---------+-------+
    | Field    | Type    | Null | Key | Default | Extra |
    +----------+---------+------+-----+---------+-------+
    | value1   | int(11) | YES  |     | NULL    |       |
    | value2   | int(11) | YES  |     | NULL    |       |
    | value3   | int(11) | YES  |     | NULL    |       |
    | value4   | int(11) | YES  |     | NULL    |       |
    | constant | int(11) | YES  |     | NULL    |       |
    +----------+---------+------+-----+---------+-------+
    

    I have 4 "value" fields in it and no primary key column (that causes trouble, but I've worked around it). Now, my data:

    +--------+--------+--------+--------+----------+
    | value1 | value2 | value3 | value4 | constant |
    +--------+--------+--------+--------+----------+
    |      2 |      5 |      6 |      0 |        2 |
    |      1 |   -100 |      0 |      0 |        1 |
    |      3 |     10 |    -10 |      0 |        3 |
    |      4 |      0 |     -1 |      5 |        3 |
    |     -1 |      1 |     -1 |      1 |        4 |
    +--------+--------+--------+--------+----------+
    

    The trick

    It's about selecting data from mentioned service schema in MySQL and working with GROUP_CONCAT function:

    select 
      concat('SELECT CASE(seq) ', 
        group_concat(groupcase separator ''), 
        ' END AS result FROM (select *, @j:=@j+1 as seq from test cross join (select @j:=0) as initj) as inittest') 
    from 
      (select 
        concat(' WHEN ', rownum, ' THEN ', groupvalue) as groupcase 
       from 
         (select 
           rownum, 
           group_concat(COLUMN_NAME SEPARATOR '+') as groupvalue 
          from 
           (select 
             *, 
             @row:=@row+1 as rownum 
            from test 
              cross join (select @row:=0) as initrow) as tablestruct 
            left join 
              (select 
                 COLUMN_NAME, 
                 @num:=@num+1 as num 
               from 
                 INFORMATION_SCHEMA.COLUMNS cross join (select @num:=0) as init 
               where 
                 TABLE_SCHEMA='test' && 
                 TABLE_NAME='test' && 
                 COLUMN_NAME!='constant') as struct 
              on tablestruct.constant>=struct.num 
            group by 
              rownum) as groupvalues) as groupscase
    

    What will this do? I recommend executing it step by step (i.e. adding each more complex layer to the part you've already understood) - I doubt there's a short way to describe what's happening. It's not wizardry; it's about constructing valid text SQL from the input conditions. The end result will look like:

    SELECT CASE(seq)  WHEN 1 THEN value1+value2 WHEN 2 THEN value1 WHEN 3 THEN value3+value2+value1 WHEN 4 THEN value3+value2+value1 WHEN 5 THEN value2+value1+value4+value3 END AS result FROM (select *, @j:=@j+1 as seq from test cross join (select @j:=0) as initj) as inittest
    

    (I didn't add formatting because that SQL is a generated string, not one you'll write yourself).

    Last step

    What now? Just assign it to a variable with:

    mysql> set @s=(select concat('SELECT CASE(seq) ', group_concat(groupcase separator ''), ' END AS result FROM (select *, @j:=@j+1 as seq from test cross join (select @j:=0) as initj) as inittest') from (select concat(' WHEN ', rownum, ' THEN ', groupvalue) as groupcase from (select rownum, group_concat(COLUMN_NAME SEPARATOR '+') as groupvalue from (select *, @row:=@row+1 as rownum from test cross join (select @row:=0) as initrow) as tablestruct left join (select COLUMN_NAME, @num:=@num+1 as num from INFORMATION_SCHEMA.COLUMNS cross join (select @num:=0) as init where TABLE_SCHEMA='test' && TABLE_NAME='test' and COLUMN_NAME!='constant') as struct on tablestruct.constant>=struct.num group by rownum) as groupvalues) as groupscase);
    Query OK, 0 rows affected (0.00 sec)
    
    mysql> prepare stmt from @s;
    Query OK, 0 rows affected (0.00 sec)
    Statement prepared
    

    and, finally:

    mysql> execute stmt;
    

    You'll get results as:

    +--------+
    | result |
    +--------+
    |      7 |
    |      1 |
    |      3 |
    |      3 |
    |      0 |
    +--------+
    
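The generated CASE expression simply sums the first `constant` value columns of each row. The same computation in plain Python, using the sample data above, is a useful cross-check against the prepared-statement output:

```python
# Each row: (value1, value2, value3, value4, constant)
rows = [
    (2, 5, 6, 0, 2),
    (1, -100, 0, 0, 1),
    (3, 10, -10, 0, 3),
    (4, 0, -1, 5, 3),
    (-1, 1, -1, 1, 4),
]

# For every row, add up the first `constant` value columns
results = [sum(row[:row[4]]) for row in rows]
print(results)  # [7, 1, 3, 3, 0]
```

These match the result set shown below.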

    Why is this bad

    Because it generates a string for the whole table - i.e. for each row! Imagine having 1000 rows - that will be nasty. MySQL also has a limit on GROUP_CONCAT, group_concat_max_len, which obviously constrains this approach.

    So why I did that?

    Because I was curious whether a way exists without additional DDL and without implicitly recounting the table's fields. I found one, so I'm leaving it here.

    qid & accept id: (20096624, 20096727) query: sum with sql and direct condition soup:


    You cannot use a derived column in the WHERE clause; there are many discussions on SO about this. One way around it is to use a subquery or a CTE:

    select val
    from (select 1+3 as val) as v
    where val > 2
    

    or

    with cte as (
        select 1+3 as val
    )
    select val
    from cte
    where val > 2
    
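Both forms run as written on most engines. A quick check with Python's sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Derived table: the alias `val` exists by the time the outer WHERE runs
subquery = "select val from (select 1+3 as val) as v where val > 2"
print(con.execute(subquery).fetchall())  # [(4,)]

# CTE form (note the required AS keyword)
cte = "with cte as (select 1+3 as val) select val from cte where val > 2"
print(con.execute(cte).fetchall())  # [(4,)]
```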
    qid & accept id: (20146719, 20146869) query: Copy row from table 1 to table 2 soup:


    If the layout of the two tables is the same, you just do:

    INSERT INTO table2
    SELECT * FROM table1;
    

    Or we can copy only the columns we want into another existing table:

    INSERT INTO table2
    (column_name(s))
    SELECT column_name(s)
    FROM table1;
    
    qid & accept id: (20147303, 20147780) query: Sybase STR-function in Oracle soup:
    select to_char(123.56, '99999999999999999999.00000000000')
    from dual;
    

    or, more generically (substitute 30 and 10 respectively as required):

    select to_char(123.56, lpad(rpad('.',10,'0'),30,'9'))
    from dual;
    

    Note: the string length will be 31 to allow room for the possible "-" (negative) sign.
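The LPAD/RPAD call just manufactures the format mask for TO_CHAR. The equivalent string construction in Python makes the shape obvious (the function name `make_mask` is my own; 10 is the decimal part including the point, 30 is the total mask length):

```python
def make_mask(total_len, dec_len):
    # rpad('.', dec_len, '0')   -> '.' followed by dec_len-1 zeros
    # lpad(..., total_len, '9') -> left-padded with 9s to total_len
    return ('.'.ljust(dec_len, '0')).rjust(total_len, '9')

mask = make_mask(30, 10)
print(mask)  # 20 nines, a point, then 9 zeros
```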

    qid & accept id: (20197566, 20198642) query: UDF Field Reference In SQL Server UPDATE statement soup:


    To actually answer your question, it is always the values before the update that are used, so with this table:

    A   |   B
    ----+-----
    1   |   2
    3   |   4
    

    Running:

    UPDATE  T
    SET     A = B,
            B = A;
    

    Will give:

    A   |   B
    ----+-----
    2   |   1
    4   |   3           
    

    It does not run in the order of the statements within the update.
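This evaluate-against-the-old-row behaviour is easy to verify with an in-memory SQLite database, which follows the same rule as SQL Server here — assigning each column from the other swaps them rather than copying one value twice:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE T (A INTEGER, B INTEGER);
    INSERT INTO T VALUES (1, 2), (3, 4);
    -- Both right-hand sides see the values from *before* the update,
    -- so this swaps the columns.
    UPDATE T SET A = B, B = A;
""")
print(con.execute("SELECT A, B FROM T").fetchall())  # [(2, 1), (4, 3)]
```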

    However, if it is not too late you should seriously consider redesigning your tables, storing delimited values in a text column is a terrible idea.

    You would be much better off storing your data in a normalised form, so you would have a table structure like:

    PermissionCode

    PermissionCode
    -------
    A               
    B               
    C
    D
    Z
    

    UserPermission

    UserID  |   PermissionCode
    --------+--------------------
    1       |   A
    1       |   B
    1       |   C
    1       |   D
    

    You can then use another table to manage linked Permissions:

    ParentCode  |   ChildCode
    ------------+---------------
        A       |       C
        A       |       G
    

    You can then get all permissions held by a user using this table, e.g. by creating a view:

    CREATE VIEW dbo.AllUserPermission
    AS
    SELECT  p.UserID, p.PermissionCode
    FROM    UserPermission p
    UNION 
    SELECT  p.UserID, lp.ChildCode
    FROM    UserPermission p
            INNER JOIN LinkedPermission lp
                ON lp.ParentCode = p.PermissionCode;
    

    Then you can get permissions that a user does not have using something like this:

    SELECT  u.UserID, P.PermissionCode
    FROM    UserTable u
            CROSS JOIN PermissionCode p
    WHERE   NOT EXISTS
            (   SELECT  1
                FROM    AllUserPermission up
                WHERE   up.UserID = u.UserID
                AND     up.PermissionCode = p.PermissionCode
            );
    

    This way, when you add new permissions you don't need to update a DoNotPromoteCode column for all the users; it is calculated on the fly by removing the permissions the user has from the list of all permissions.
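The CROSS JOIN / NOT EXISTS anti-join pattern can be sketched with sqlite3 — the tables here are stripped-down stand-ins for the schema above (one user, three permission codes, no linked-permission view):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE PermissionCode (PermissionCode TEXT);
    INSERT INTO PermissionCode VALUES ('A'), ('B'), ('C');
    CREATE TABLE UserTable (UserID INTEGER);
    INSERT INTO UserTable VALUES (1);
    CREATE TABLE UserPermission (UserID INTEGER, PermissionCode TEXT);
    INSERT INTO UserPermission VALUES (1, 'A');
""")

# All codes per user, minus the ones the user already has
missing = con.execute("""
    SELECT u.UserID, p.PermissionCode
    FROM UserTable u
    CROSS JOIN PermissionCode p
    WHERE NOT EXISTS (
        SELECT 1 FROM UserPermission up
        WHERE up.UserID = u.UserID
        AND up.PermissionCode = p.PermissionCode
    )
    ORDER BY p.PermissionCode
""").fetchall()
print(missing)  # [(1, 'B'), (1, 'C')]
```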

    If you specifically need to store codes that people have explicitly opted out of, in addition to those they are not receiving, then you could add a column to the UserPermission table to store this; you can also store dates and times so you know when various actions were taken:

    UserID  |   PermissionCode  |   AddedDateTime   |   DoNotPromoteDateTime    |   RemovedDateTime
    --------+-------------------+-------------------+---------------------------+--------------------
    1       |       A           | 2013-11-25 16:55  |           NULL            |       NULL
    1       |       B           | 2013-11-25 16:55  |       2013-11-25 16:55    |       NULL
    1       |       C           | 2013-11-25 16:55  |       2013-11-25 16:56    |   2013-11-25 16:57
    1       |       D           | 2013-11-25 16:55  |           NULL            |   2013-11-25 16:57
    

    By querying on whether certain columns are NULL or not you can determine various states.

    This is a much more manageable way of dealing with a one-to-many relationship; pipe-delimited strings will cause no end of problems. If you need to show the permission codes as a delimited string for any reason, this can be achieved using SQL Server's XML extensions.

    qid & accept id: (20250357, 20251202) query: Parsing string values in Access soup:


    Put the following functions into a Module:

       Function CountCSWords (ByVal S) As Integer
      ' Counts the words in a string that are separated by commas.
    
      Dim WC As Integer, Pos As Integer
         If VarType(S) <> 8 Or Len(S) = 0 Then
           CountCSWords = 0
           Exit Function
         End If
         WC = 1
         Pos = InStr(S, ",")
         Do While Pos > 0
           WC = WC + 1
           Pos = InStr(Pos + 1, S, ",")
         Loop
         CountCSWords = WC
      End Function
    
      Function GetCSWord (ByVal S, Indx As Integer)
      ' Returns the nth word in a specific field.
    
      Dim WC As Integer, Count As Integer, SPos As Integer, EPos As Integer
         WC = CountCSWords(S)
         If Indx < 1 Or Indx > WC Then
           GetCSWord = Null
           Exit Function
         End If
         Count = 1
         SPos = 1
         For Count = 2 To Indx
           SPos = InStr(SPos, S, ",") + 1
         Next Count
         EPos = InStr(SPos, S, ",") - 1
         If EPos <= 0 Then EPos = Len(S)
         GetCSWord = Trim(Mid(S, SPos, EPos - SPos + 1))
      End Function
    

    Then, put a field in your query like this:

    MyFirstField: GetCSWord([FieldForms],1)
    

    Put another one in like this:

    MySecondField: GetCSWord([FieldForms],2)
    

    Etc... for as many as you need.
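The two VBA functions together amount to splitting on commas and picking the nth trimmed piece. A sketch of the same logic in Python, for comparison (function names are my own):

```python
def count_cs_words(s):
    # Counts comma-separated words; empty or missing input counts as zero
    if not s:
        return 0
    return s.count(",") + 1

def get_cs_word(s, indx):
    # Returns the nth (1-based) comma-separated word, trimmed,
    # or None when the index is out of range - mirroring the VBA Null
    if indx < 1 or indx > count_cs_words(s):
        return None
    return s.split(",")[indx - 1].strip()

print(get_cs_word("alpha, beta, gamma", 2))  # beta
```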

    qid & accept id: (20299075, 20300834) query: Convert Multiple Rows into Multiple Columns soup:


    This:

    create table #source (
        aid int,
        qid int,
        answer char(2),
        istrue bit
    )
    insert into #source values
        (1,11,'a1',1),
        (2,11,'a2',0),
        (3,11,'a3',0),
        (4,11,'a4',0),
        (1,12,'a5',0),
        (2,12,'a6',0),
        (3,12,'a7',1),
        (4,12,'a8',0)
    
    select s.qid,
        q1.aid as aid1, q1.answer as answer1, q1.istrue as istrue1,
        q2.aid as aid2, q2.answer as answer2, q2.istrue as istrue2,
        q3.aid as aid3, q3.answer as answer3, q3.istrue as istrue3,
        q4.aid as aid4, q4.answer as answer4, q4.istrue as istrue4
    from (
        select distinct qid
        from #source
    ) s
    join #source q1 on q1.qid=s.qid and q1.aid=1
    join #source q2 on q2.qid=s.qid and q2.aid=2
    join #source q3 on q3.qid=s.qid and q3.aid=3
    join #source q4 on q4.qid=s.qid and q4.aid=4
    order by s.qid
    

    produces:

    qid aid1 answer1 istrue1 aid2 answer2 istrue2 aid3 answer3 istrue3 aid4 answer4 istrue4
    11  1    a1      1       2    a2      0       3    a3      0       4    a4      0
    12  1    a5      0       2    a6      0       3    a7      1       4    a8      0
    
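The same one-self-join-per-slot pivot runs under sqlite3 as well; since `#source` temp tables are SQL Server syntax, this sketch uses a plain table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE source (aid INT, qid INT, answer TEXT, istrue INT);
    INSERT INTO source VALUES
        (1,11,'a1',1),(2,11,'a2',0),(3,11,'a3',0),(4,11,'a4',0),
        (1,12,'a5',0),(2,12,'a6',0),(3,12,'a7',1),(4,12,'a8',0);
""")

# One self-join per answer slot turns the 4 rows per qid into 1 wide row
rows = con.execute("""
    SELECT s.qid,
        q1.answer, q1.istrue, q2.answer, q2.istrue,
        q3.answer, q3.istrue, q4.answer, q4.istrue
    FROM (SELECT DISTINCT qid FROM source) s
    JOIN source q1 ON q1.qid = s.qid AND q1.aid = 1
    JOIN source q2 ON q2.qid = s.qid AND q2.aid = 2
    JOIN source q3 ON q3.qid = s.qid AND q3.aid = 3
    JOIN source q4 ON q4.qid = s.qid AND q4.aid = 4
    ORDER BY s.qid
""").fetchall()
print(rows[0])  # (11, 'a1', 1, 'a2', 0, 'a3', 0, 'a4', 0)
```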
    qid & accept id: (20304400, 20304791) query: Use send_dbmail to send an email for each row in sql table soup:


    1) You could use a LOCAL FAST_FORWARD cursor to read every row and then execute sp_send_dbmail

    or

    2) You could dynamically generate a SQL statement that includes the list of EXEC sp_send_dbmail statements, like this:

    DECLARE @SqlStatement NVARCHAR(MAX) = N'
        EXEC msdb.dbo.sp_send_dbmail @recipients=''dest01@domain.com'', ...; 
        EXEC msdb.dbo.sp_send_dbmail @recipients=''dest02@domain.com'', ...; 
        EXEC msdb.dbo.sp_send_dbmail @recipients=''dest03@domain.com'', ...;
        ...';
    EXEC(@SqlStatement);
    

    or

    DECLARE @bodyText NVARCHAR(MAX);
    SET @bodyText = ...;
    
    DECLARE @SqlStatement NVARCHAR(MAX) = N'
        EXEC msdb.dbo.sp_send_dbmail @recipients=''dest01@domain.com'', @body = @pBody, ...; 
        EXEC msdb.dbo.sp_send_dbmail @recipients=''dest02@domain.com'', @body = @pBody, ...; 
        EXEC msdb.dbo.sp_send_dbmail @recipients=''dest03@domain.com'', @body = @pBody, ...; 
        ...';
    EXEC sp_executesql @SqlStatement, N'@pBody NVARCHAR(MAX)', @pBody = @bodyText;
    
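Option 2 is plain string assembly, whichever layer does it. A sketch of generating one batched statement from a recipient list — the addresses are the placeholders from above, and handing the resulting text to EXEC()/sp_executesql is left out:

```python
recipients = ["dest01@domain.com", "dest02@domain.com", "dest03@domain.com"]

# One EXEC per recipient, joined into a single batch; @pBody stays a
# parameter so the shared body text is passed once via sp_executesql
batch = "\n".join(
    f"EXEC msdb.dbo.sp_send_dbmail @recipients='{r}', @body = @pBody;"
    for r in recipients
)
print(batch)
```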
    qid & accept id: (20358637, 20359071) query: Retaining data while modifying column data type soup:


    The upper limit of the varray2 data type can be increased with the alter type statement:

    create or replace type Varray2 is varray(50) of varchar2(20);
    /
    TYPE VARRAY2 compiled
    
    
    create table owner (
      modified        date,          
      id1             Varchar2(18),  --   use varchar2 data type, not varchar. 
      state           Varchar2(2),  
      contributer_ids Varray2
    )
    /
    
    table OWNER created.
    

    Current information about varray2 data type:

    SQL> clear screen;
    SQL> column type_name format a11;
    SQL> column upper_bound format a11
    
    SQL> select t.type_name
      2       , t.upper_bound
      3   from all_coll_types t
      4  where type_name = 'VARRAY2';
    
    TYPE_NAME   UPPER_BOUND
    ----------- -----------
    VARRAY2              50 
    

    Change the upper limit of the varray2 data type:

    SQL> alter type Varray2 modify limit 150 cascade;
    
    type VARRAY2 altered.
    

    After the upper limit of the varray2 data type has changed:

    SQL> clear screen;
    SQL> column type_name format a11;
    SQL> column upper_bound format a11
    
    SQL> select t.type_name
      2       , t.upper_bound
      3   from all_coll_types t
      4  where type_name = 'VARRAY2';
    
    TYPE_NAME   UPPER_BOUND
    ----------- -----------
    VARRAY2             150 
    

    The cascade clause of the alter type statement propagates the data type change to the dependent objects, whether that's a table or another data type.

    qid & accept id: (20371389, 20372508) query: update column to remove html tags soup:


    UDF stands for "user-defined function" - unless you have defined the function with the name "udf_StripHTML", this simply won't work. I think you are referring to this function:

    CREATE FUNCTION [dbo].[udf_StripHTML] (@HTMLText VARCHAR(MAX))
    RETURNS VARCHAR(MAX)
    AS
    BEGIN
        DECLARE @Start INT
        DECLARE @End INT
        DECLARE @Length INT
        SET @Start = CHARINDEX('<', @HTMLText)
        SET @End = CHARINDEX('>', @HTMLText, CHARINDEX('<', @HTMLText))
        SET @Length = (@End - @Start) + 1
        WHILE @Start > 0 AND @End > 0 AND @Length > 0
        BEGIN
            SET @HTMLText = STUFF(@HTMLText, @Start, @Length, '')
            SET @Start = CHARINDEX('<', @HTMLText)
            SET @End = CHARINDEX('>', @HTMLText, CHARINDEX('<', @HTMLText))
            SET @Length = (@End - @Start) + 1
        END
        RETURN LTRIM(RTRIM(@HTMLText))
    END
    GO
    

    To test this function, run it on a string that actually contains tags, for example:

    SELECT dbo.udf_StripHTML('<b>UDF</b> at stackoverflow.com <br> Stackoverflow.com')

    Result Set:

    UDF at stackoverflow.com Stackoverflow.com

    This function was set up by Pinal Dave - see here.

    Hope this helps.
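    For comparison, here is the same tag-stripping loop sketched in Python (my sketch, not part of the original answer; it mirrors the CHARINDEX/STUFF logic rather than using a real HTML parser):

```python
def strip_html(text: str) -> str:
    """Remove <...> spans, mirroring the T-SQL CHARINDEX/STUFF loop."""
    start = text.find('<')           # CHARINDEX('<', @HTMLText)
    while start != -1:
        end = text.find('>', start)  # CHARINDEX('>', @HTMLText, @Start)
        if end == -1:
            break
        # STUFF(@HTMLText, @Start, @Length, '') -> cut the tag out
        text = text[:start] + text[end + 1:]
        start = text.find('<')
    return text.strip()              # LTRIM(RTRIM(...))

print(strip_html('a <i>b</i> c'))    # prints: a b c
```

    Like the UDF, this removes anything between '<' and '>', so a stray comparison operator in plain text would be eaten too; for messy input a real HTML parser is safer.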

    qid & accept id: (20401247, 20401409) query: Getting the number of rows on MySQL with SQL and PHP soup:

    soup wrap:

    I advise you to change the peopleTable table to the following structure:

    peopleTable
    person fruit_id
    john   1
    ...
    

    And for the question you need the following SQL:

    SELECT a.id, COUNT(*) as count FROM fruitsTable a
    LEFT JOIN peopleTable b ON a.id = b.fruit_id
    GROUP BY a.id
    

    This will output the following (example data):

    id  count
    1   2
    2   4
    ... 
    

    And the update query:

    UPDATE fruitsTable a SET numberOfPeople = (
        SELECT COUNT(*) FROM peopleTable b WHERE a.id = b.fruit_id GROUP BY b.fruit_id
    );
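    As a quick sanity check of the counting query (run here against SQLite from Python rather than MySQL/PHP, with made-up data; note COUNT(b.fruit_id) instead of COUNT(*), so a fruit with no people reports 0 rather than 1):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE fruitsTable (id INTEGER PRIMARY KEY, numberOfPeople INTEGER);
    CREATE TABLE peopleTable (person TEXT, fruit_id INTEGER);
    INSERT INTO fruitsTable (id) VALUES (1), (2), (3);
    INSERT INTO peopleTable VALUES ('john', 1), ('mary', 1), ('bob', 2);
""")
# LEFT JOIN keeps fruits nobody picked; COUNT(b.fruit_id) ignores the NULLs
rows = conn.execute("""
    SELECT a.id, COUNT(b.fruit_id) AS count
    FROM fruitsTable a
    LEFT JOIN peopleTable b ON a.id = b.fruit_id
    GROUP BY a.id
    ORDER BY a.id
""").fetchall()
print(rows)  # [(1, 2), (2, 1), (3, 0)]
```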
    
    qid & accept id: (20435406, 20435941) query: How to compute the sum of a variable in R considering ID variable and Index variable and save results in a matrix soup:

    soup wrap:

    You can do it in three steps, assuming tout is your data frame:

    > library(data.table)
    > tout <- as.data.table(tout)
    > setkey(tout, ProductID)
    > cart <- tout[tout, allow.cartesian = TRUE]
         ProductID Id Price Index Id.1 Price.1 Index.1
      1:         1  1     1     1    1       1       1
      2:         1 10     1     2    1       1       1
      3:         1 21     1     3    1       1       1
      4:         1 34     1     4    1       1       1
      5:         1  1     1     1   10       1       2
     ---                                              
    168:        14 46    11     4   33      11       3
    169:        14 33    11     3   46      11       4
    170:        14 46    11     4   46      11       4
    171:        15 47    12     4   47      12       4
    172:        16 48    12     4   48      12       4
    

    Now cart is a Cartesian product of tout with itself, using ProductID as the key.

    > x <- cart[, sum(Price), by = list(Index, Index.1)]
        Index Index.1  V1
     1:     1       1  45
     2:     2       1  45
     3:     3       1  45
     4:     4       1  45
     5:     1       2  45
     6:     2       2  66
     7:     3       2  66
     8:     4       2  66
     9:     1       3  45
    10:     2       3  66
    11:     3       3  88
    12:     4       3  88
    13:     1       4  45
    14:     2       4  66
    15:     3       4  88
    16:     4       4 112
    

    x is almost what you need, but in data table (long) form. You need to cast it to matrix (wide) form with the help of acast from the reshape2 package:

    > library(reshape2)
    > a <- acast(x, Index ~ Index.1, value.var = "V1")
       1  2  3   4
    1 45 45 45  45
    2 45 66 66  66
    3 45 66 88  88
    4 45 66 88 112
    

    Finally, set the upper triangular part of the matrix to NA:

    > a[upper.tri(a)] <- NA
       1  2  3   4
    1 45 NA NA  NA
    2 45 66 NA  NA
    3 45 66 88  NA
    4 45 66 88 112
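    To see what the three data.table steps compute, the same logic can be mirrored in plain Python on a tiny made-up stand-in for tout (my sketch; these numbers are not the question's data):

```python
from collections import defaultdict

# (ProductID, Id, Price, Index) rows standing in for 'tout'
tout = [(1, 1, 10, 1), (1, 2, 20, 2), (2, 3, 5, 1), (2, 4, 5, 2)]

# Step 1: cartesian self-join keyed on ProductID
cart = [(a, b) for a in tout for b in tout if a[0] == b[0]]

# Step 2: sum Price over the joined pairs, grouped by (Index, Index.1)
sums = defaultdict(int)
for a, b in cart:
    sums[(a[3], b[3])] += a[2]

# Step 3: cast to a matrix and blank the upper triangle (upper.tri <- NA)
idx = sorted({i for i, _ in sums})
matrix = [[sums[(i, j)] if i >= j else None for j in idx] for i in idx]
print(matrix)  # [[15, None], [25, 25]]
```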
    
    qid & accept id: (20454604, 20456616) query: PL/SQL help. How to write a anonymous block that inserts 100 new rows soup:

    soup wrap:

    Your insert statement should look like this:

    INSERT INTO emp2 
    ( EMPLOYEE_ID, FIRST_NAME, LAST_NAME, HIRE_DATE, SALARY, DEPARTMENT_ID )
    VALUES 
    ( i, 'Fname', 'Lname', sysdate, 100, 10 );
    

    You need to add an IF statement for the part "also add code that inserts placeholders in the first_name and last_name columns for employee ID 2000". Like this:

    IF i = 2000
    THEN
       INSERT INTO emp2 
       ( EMPLOYEE_ID, FIRST_NAME, LAST_NAME, HIRE_DATE, SALARY, DEPARTMENT_ID )
       VALUES 
       ( i, 'Fname ' || i, 'Lname ' || i, sysdate, 100, 10 );
    ELSE
       INSERT INTO emp2 
       ( EMPLOYEE_ID, FIRST_NAME, LAST_NAME, HIRE_DATE, SALARY, DEPARTMENT_ID )
       VALUES 
       ( i, 'Fname', 'Lname', sysdate, 100, 10 );
    END IF;
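    The surrounding loop (not shown in the answer) can be sketched with SQLite from Python; the id range here is an assumption, chosen so that 2000 falls inside the 100 rows:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("""CREATE TABLE emp2 (
    employee_id INTEGER PRIMARY KEY, first_name TEXT, last_name TEXT,
    hire_date TEXT, salary REAL, department_id INTEGER)""")

for i in range(1951, 2051):          # 100 ids, the FOR i IN ... LOOP
    if i == 2000:                    # the IF branch: placeholder names
        row = (i, 'Fname %d' % i, 'Lname %d' % i)
    else:
        row = (i, 'Fname', 'Lname')
    conn.execute("INSERT INTO emp2 VALUES (?, ?, ?, date('now'), 100, 10)", row)

count = conn.execute("SELECT COUNT(*) FROM emp2").fetchone()[0]
special = conn.execute(
    "SELECT first_name FROM emp2 WHERE employee_id = 2000").fetchone()[0]
print(count, special)  # 100 Fname 2000
```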
    
    qid & accept id: (20510051, 20511184) query: Getting the $rank variable and updating it in the table soup:

    soup wrap:

    Instead of constantly hitting the database with multiple queries, consider doing it all at once, like this:

    UPDATE bank t JOIN 
    (
      SELECT id, bankaccount, 
      (
        SELECT COUNT(*)
          FROM bank
         WHERE id = b.id
           AND bankbalance > b.bankbalance
      ) + 1 rank
        FROM bank b
       WHERE id = 1
    ) s 
       ON t.id = s.id
      AND t.bankaccount = s.bankaccount
      SET t.bankaccountranking = rank;
    

    Here is SQLFiddle demo

    or with two statements, leveraging user variables and ORDER BY in UPDATE

    SET @rnum = 0;
    UPDATE bank
       SET bankaccountranking = (@rnum := @rnum + 1)
     WHERE id = 1
     ORDER BY bankbalance DESC;
    

    Here is SQLFiddle demo


    Now the PHP code might look like this:

    $sessionid = $_SESSION['uid'];
    
    $sql = "UPDATE bank t JOIN 
    (
      SELECT id, bankaccount, 
      (
        SELECT COUNT(*)
          FROM bank
         WHERE id = b.id
           AND bankbalance > b.bankbalance
      ) + 1 rank
        FROM bank b
       WHERE id = :id
    ) s 
       ON t.id = s.id
      AND t.bankaccount = s.bankaccount
      SET t.bankaccountranking = rank;";
    
    $stmt = $conn->prepare($sql);
    $stmt->bindParam(':id', $sessionid , PDO::PARAM_INT);
    $stmt->execute();
    

    UPDATE: to implement the equivalent of the DENSE_RANK() analytic function with a subquery, you can do:

    UPDATE bank t JOIN 
    (
      SELECT id, bankaccount, 
      (
        SELECT COUNT(DISTINCT bankbalance)
          FROM bank
         WHERE id = b.id
           AND bankbalance > b.bankbalance
      ) + 1 rank
        FROM bank b
       WHERE id = 1
    ) s 
       ON t.id = s.id
      AND t.bankaccount = s.bankaccount
      SET t.bankaccountranking = rank;
    

    Here is SQLFiddle demo

    or with user (session) variables:

    SET @r = 0, @b = NULL; 
    UPDATE bank b JOIN
    (
      SELECT id, bankaccount, @r := IF(@b = bankbalance, @r, @r + 1) rank, @b := bankbalance
        FROM bank
       WHERE id = 1
       ORDER BY bankbalance DESC
    ) s
        ON b.id = s.id
       AND b.bankaccount = s.bankaccount
       SET bankaccountranking = rank;
    

    Here is SQLFiddle demo
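    The correlated-subquery ranking is portable; here is a quick check with SQLite from Python (made-up balances; only the subquery variant applies, since SQLite has no ORDER BY in UPDATE and no user variables):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE bank (id INTEGER, bankaccount INTEGER,
                       bankbalance INTEGER, bankaccountranking INTEGER);
    INSERT INTO bank VALUES (1, 1, 500, NULL), (1, 2, 900, NULL), (1, 3, 100, NULL);
""")
# Rank = 1 + number of this user's accounts holding a larger balance
conn.execute("""
    UPDATE bank
    SET bankaccountranking = 1 + (
        SELECT COUNT(*) FROM bank b
        WHERE b.id = bank.id AND b.bankbalance > bank.bankbalance)
    WHERE id = 1
""")
ranks = dict(conn.execute(
    "SELECT bankaccount, bankaccountranking FROM bank").fetchall())
print(ranks)  # {1: 2, 2: 1, 3: 3}
```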

    qid & accept id: (20519532, 20525814) query: Storing multiple tags in one column soup:

    soup wrap:

    This is a good case for a bridge table. Let's say you have in your database:

    file_info
    ---------
    file_id
    author
    create_date
    
    tag_info
    --------
    tag_id
    tag_name
    

    tag_id is a surrogate key, and would be a unique, incrementing value for each new tag. So it may look like:

    tag_id  tag_name
    ------  --------
         1  Apples
         2  Pears
         3  Peaches
    

    You then create the bridge, which links files to the applicable tags:

    file_tag_bridge
    ---------------
    file_id
    tag_id
    

    The combination of file_id/tag_id will be unique in the table (it is a compound key), but a given file_id may be associated with multiple (different) tag_id, and vice-versa.

    You will have one row in this table for each tag associated with a file:

    file_id   tag_id
    -------   ------
          1        1
          2        2
          2        3
    

    In this case, file 1 is associated with the Apples tag; file 2 is associated with Pears and Peaches. File 3 is not associated with any tags, and therefore is not represented in the bridge table.
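    A minimal runnable sketch of the bridge design (SQLite via Python, using the example data above):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE tag_info (tag_id INTEGER PRIMARY KEY, tag_name TEXT);
    CREATE TABLE file_tag_bridge (
        file_id INTEGER, tag_id INTEGER,
        PRIMARY KEY (file_id, tag_id));   -- the compound key
    INSERT INTO tag_info VALUES (1, 'Apples'), (2, 'Pears'), (3, 'Peaches');
    INSERT INTO file_tag_bridge VALUES (1, 1), (2, 2), (2, 3);
""")
# Resolve file 2's tags through the bridge
tags = [r[0] for r in conn.execute("""
    SELECT t.tag_name
    FROM file_tag_bridge b
    JOIN tag_info t ON b.tag_id = t.tag_id
    WHERE b.file_id = 2
    ORDER BY t.tag_id
""")]
print(tags)  # ['Pears', 'Peaches']
```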

    qid & accept id: (20546039, 20546101) query: Create custom field in SELECT if other field is null soup:

    soup wrap:

    Use CASE instead of IF:

    SELECT 
        FIRST_NAME,
        LAST_NAME,
        ULTIMATE_PARENT_NAME, 
        CASE WHEN LOCATION_ACCOUNT_ID IS NULL THEN 'Y' ELSE '' END AS IMPACT
    FROM (
        SELECT DISTINCT 
            A.FIRST_NAME,
            A.LAST_NAME,
            B.LOCATION_ACCOUNT_ID,
            A.ULTIMATE_PARENT_NAME
        FROM ACTIVE_ACCOUNTS A,
        QL_ASSETS B
        WHERE A.ACCOUNT_ID = B.LOCATION_ACCOUNT_ID(+)
    )
    

    You should also use LEFT JOIN syntax instead of the old (+) syntax (but that's more of a style choice in this case - it does not change the result):

    SELECT 
        FIRST_NAME,
        LAST_NAME,
        ULTIMATE_PARENT_NAME, 
        CASE WHEN LOCATION_ACCOUNT_ID IS NULL THEN 'Y' ELSE '' END AS IMPACT
    FROM (
        SELECT DISTINCT 
            A.FIRST_NAME,
            A.LAST_NAME,
            B.LOCATION_ACCOUNT_ID,
            A.ULTIMATE_PARENT_NAME
        FROM ACTIVE_ACCOUNTS A
        LEFT JOIN QL_ASSETS B
            ON A.ACCOUNT_ID = B.LOCATION_ACCOUNT_ID
         )
    

    In fact, since you aren't using any of the columns from B in your result (only checking for existence) you can just use EXISTS:

    SELECT 
        FIRST_NAME,
        LAST_NAME,
        ULTIMATE_PARENT_NAME, 
        CASE WHEN EXISTS(SELECT NULL 
                         FROM QL_ASSETS 
                         WHERE LOCATION_ACCOUNT_ID = A.ACCOUNT_ID)
             THEN '' 
             ELSE 'Y' 
             END AS IMPACT
        FROM ACTIVE_ACCOUNTS A
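    A quick check of the EXISTS pattern with SQLite from Python (placeholder schema and data; the condition is negated so that, as in the LEFT JOIN version, accounts with no matching asset get the 'Y' flag):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE active_accounts (account_id INTEGER, first_name TEXT);
    CREATE TABLE ql_assets (location_account_id INTEGER);
    INSERT INTO active_accounts VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO ql_assets VALUES (1);
""")
# 'Y' marks accounts that have no row in ql_assets
rows = conn.execute("""
    SELECT first_name,
           CASE WHEN NOT EXISTS (SELECT NULL FROM ql_assets
                                 WHERE location_account_id = a.account_id)
                THEN 'Y' ELSE '' END AS impact
    FROM active_accounts a
    ORDER BY account_id
""").fetchall()
print(rows)  # [('Ann', ''), ('Bob', 'Y')]
```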
    
    qid & accept id: (20551358, 20552934) query: Use the value an XML element as a variable for a procedure soup:

    soup wrap:

    If your question is about how to read from an XML file, here is an example.

    Assuming this is declared:

    Dim xml = 
                DBUser2
                
                N127.0.0.1\CESSQL
                Marino
                
                
                
              
    

    It's just one line of code:

    xml.Element("ServerDatabase").Value
    

    Or, to keep your variable names:

    Dim ServerDatabaseValue As String = xml.Element("ServerDatabase").Value
    

    Always specify variable types. To help you with that, you can set Option Strict On and Option Infer Off in your project settings. This can improve your code quality by forcing you into certain (good) development habits.
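    The same lookup in Python, for comparison (element names other than ServerDatabase are hypothetical, since the original XML is not fully shown):

```python
import xml.etree.ElementTree as ET

# Hypothetical settings fragment; only ServerDatabase is used by the answer
doc = ET.fromstring(
    "<Settings>"
    "<User>DBUser2</User>"
    "<ServerDatabase>127.0.0.1\\CESSQL</ServerDatabase>"
    "</Settings>")

server_database = doc.find("ServerDatabase").text
print(server_database)  # 127.0.0.1\CESSQL
```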

    qid & accept id: (20589984, 20591149) query: listing SQL table's rows in text file soup:

    soup wrap:

    In order to get the fieldnames, you would have to write something like this

       for I := 0 to ADODataSet.FieldCount - 1 do 
        Write (WOLFile,ADODataSet.Fields[I].displayname);
       writeln (WOLFile);
    

    Output the data only with 'write', so that all the column names appear in the same line, then open a new line with 'writeln'.

    Then you can add your code which iterates over the table. Here's the entire code:

    with ADODataSet do
     begin
      for i:= 0 to fieldcount - 1 do write (WOLFile, Fields[I].displayname);
      writeln (WOLFile);
      first;
      while not eof do
       begin
        for I := 0 to FieldCount - 1 do Write (WOLFile, Fields[I].AsString);
        writeln (WOLFile);
        next
       end;
      end;
     end;
    

    The columns probably won't left align correctly, but I'll leave that little problem up to you.

    People here don't like the use of the 'with' construct but I don't see any problem in this snippet.

    You could also save the output in a stringlist then write the stringlist to a file at the end, instead of using write and writeln. In order to do that, you would have to concatenate the values of each 'for i' loop into a local variable then add that variable to the stringlist. If you add each value to be printed directly to the stringlist, then every value will appear on a separate line.
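    The same header-then-rows pattern, sketched in Python against SQLite instead of Delphi/ADO (my stand-in table; cursor.description plays the role of Fields[I].DisplayName):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE parts (name TEXT, qty INTEGER)")
conn.execute("INSERT INTO parts VALUES ('bolt', 7), ('nut', 3)")

cur = conn.execute("SELECT * FROM parts ORDER BY rowid")
lines = ['\t'.join(col[0] for col in cur.description)]  # column names first
for row in cur:                                         # then one line per row
    lines.append('\t'.join(str(v) for v in row))
text = '\n'.join(lines)
print(text)
```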

    qid & accept id: (20637482, 20637609) query: Pivot without aggregate - again soup:

    soup wrap:

    Based on your sample data, you can easily get the result using an aggregate function with a CASE expression:

    select userlicenseid,
      startdate,
      max(case when name = 'Other' then value end) Other,
      max(case when name = 'Pathways' then value end) Pathways,
      max(case when name = 'Execution' then value end) Execution,
      max(case when name = 'Focus' then value end) Focus,
      max(case when name = 'Profit' then value end) Profit
    from yourtable
    group by userlicenseid, startdate;
    

    See SQL Fiddle with Demo. Since you are converting string values into columns, you will want to use either the min() or max() aggregate.

    You could use the PIVOT function to get the result as well:

    select userlicenseid, startdate,
      Other, Pathways, Execution, Focus, Profit
    from
    (
      select userlicenseid, startdate,
        name, value
      from yourtable
    ) d
    pivot
    (
      max(value)
      for name in (Other, Pathways, Execution, Focus, Profit)
    ) piv;
    

    See SQL Fiddle with Demo
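    The conditional-aggregation form is portable; here it is checked against SQLite from Python with two made-up rows (PIVOT itself is SQL Server-specific):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE yourtable (userlicenseid INTEGER, startdate TEXT,
                            name TEXT, value TEXT);
    INSERT INTO yourtable VALUES
        (1, '2013-01-01', 'Other', 'a'),
        (1, '2013-01-01', 'Focus', 'b');
""")
# MAX(CASE ...) picks the single matching value per group; unmatched -> NULL
row = conn.execute("""
    SELECT userlicenseid,
           MAX(CASE WHEN name = 'Other' THEN value END) AS Other,
           MAX(CASE WHEN name = 'Focus' THEN value END) AS Focus
    FROM yourtable
    GROUP BY userlicenseid, startdate
""").fetchone()
print(row)  # (1, 'a', 'b')
```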

    qid & accept id: (20643084, 20643411) query: Combine sql rows into additional columns soup:

    soup wrap:

    Of course PostgreSQL supports a pivot function. Use crosstab() from the additional module tablefunc. It's up for debate whether that's "native" or not.

    Run once per database:

    CREATE EXTENSION tablefunc;
    

    And consider this detailed explanation:
    PostgreSQL Crosstab Query

    However, what you are trying to do is the opposite of a pivot function! A counter-pivot. I would use UNION ALL:

    SELECT item_name, 'store_A'::text AS store, store_a AS quantity
    FROM   stock_usage
    
    UNION ALL
    SELECT item_name, 'store_B'::text, store_b
    FROM   stock_usage
    
    UNION ALL
    SELECT item_name, 'store_C'::text, store_c
    FROM   stock_usage
    
    ...
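    The UNION ALL counter-pivot can be tried on any engine; a small SQLite check from Python (two stores, made-up data):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE stock_usage (item_name TEXT, store_a INTEGER, store_b INTEGER);
    INSERT INTO stock_usage VALUES ('widget', 4, 9);
""")
# One SELECT per column, glued with UNION ALL, turns columns into rows
rows = conn.execute("""
    SELECT item_name, 'store_A' AS store, store_a AS quantity FROM stock_usage
    UNION ALL
    SELECT item_name, 'store_B', store_b FROM stock_usage
    ORDER BY store
""").fetchall()
print(rows)  # [('widget', 'store_A', 4), ('widget', 'store_B', 9)]
```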
    
    qid & accept id: (20659137, 20660749) query: MySQL: Getting the highest number of a combination of two fields soup:

    soup wrap:

    Two SQL statements; the second should do it...

    
    SELECT
        AA.user, AA.tone, AA.color, MAX(AA.toneCounter) as toneCounter
    FROM (
        SELECT
        user, tone, color, COUNT(tone) as toneCounter
        FROM
        experiments
        LEFT JOIN
        pairings
        ON
        experiments.experimentId = pairings.experimentId 
        GROUP BY
        user, tone, color
    ) AA
    Group by
        AA.user, AA.tone
    

    ... my first answer did not satisfy me, so I double-checked it. I think the next query is more adequate (and even runs on databases other than MySQL):

    
    SELECT 
        AAA.user, AAA.tone, BBB.color, AAA.toneCounter 
    FROM (
        SELECT
        AA.user, AA.tone, MAX(AA.toneCounter) as toneCounter
        FROM (
        SELECT
            user, tone, color, COUNT(tone) as toneCounter
        FROM
            experiments
        LEFT JOIN
            pairings
        ON
            experiments.experimentId = pairings.experimentId 
        GROUP BY
            user, tone, color
        ) AA
        Group by
        AA.user, AA.tone
    ) AAA
    join (
        SELECT
        BB.user, BB.tone, BB.color, MAX(BB.toneCounter) as toneCounter
        FROM (
        SELECT
            user, tone, color, COUNT(tone) as toneCounter
        FROM
            experiments
        LEFT JOIN
            pairings
        ON
            experiments.experimentId = pairings.experimentId 
        GROUP BY
            user, tone, color
        ) BB
        Group by
        BB.user, BB.tone, BB.color 
    ) BBB
    ON
        BBB.user = AAA.user
        AND BBB.tone = AAA.tone 
        AND BBB.toneCounter = AAA.toneCounter 
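    The core trick in the second statement is joining the per-(user, tone) maximum back to the detail rows to recover the color; a compact SQLite check from Python (made-up counts, skipping the experiments/pairings join that merely produces them):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE counts (user TEXT, tone TEXT, color TEXT, toneCounter INTEGER);
    INSERT INTO counts VALUES
        ('u1', 'high', 'red', 3), ('u1', 'high', 'blue', 5), ('u1', 'low', 'red', 2);
""")
# Join the per-(user, tone) maximum back to the detail rows for the color
rows = conn.execute("""
    SELECT c.user, c.tone, c.color, c.toneCounter
    FROM counts c
    JOIN (SELECT user, tone, MAX(toneCounter) AS m
          FROM counts GROUP BY user, tone) mx
      ON c.user = mx.user AND c.tone = mx.tone AND c.toneCounter = mx.m
    ORDER BY c.tone
""").fetchall()
print(rows)  # [('u1', 'high', 'blue', 5), ('u1', 'low', 'red', 2)]
```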
    
    qid & accept id: (20661535, 20711214) query: How to Group time segments and check break time soup:

    soup wrap:

    I believe that if you want to combine both times, you need to take them out of the GROUP BY and sum them. Based on the results, the reporting can check total hours and break hours. You can add CASE statements if you want to flag them.

    SELECT  ftc.lEmployeeID
           ,ftc.sFirstName
           ,ftc.sLastName
           ,SUM(ftc.TotalHours) AS TotalHours
           ,DATEDIFF(mi, MIN(ftc.dtTimeOut), MAX(ftc.dtTimeIn)) AS BreakTimeMinutes
    FROM dbo.fTimeCard(@StartDate, @EndDate,
                       @DeptList, @iActive, @EmployeeList) AS ftc
    WHERE (ftc.DID IS NOT NULL) OR
          (ftc.DID IS NOT NULL AND ftc.dtTimeOut IS NULL)
    GROUP BY ftc.lEmployeeID, ftc.sFirstName, ftc.sLastName
    HAVING SUM(ftc.TotalHours) >= 0
    

    I made this quick test in SQL and it appears to work the way you want. Did you add something to the GROUP BY?

    declare @table table (emp_id int,name varchar(4), tin time,tout time);
    
    insert into @table
    VALUES (1,'d','8:30:00','11:35:00'),
        (1,'d','13:00:00','17:00:00');
    
    
    SELECT t.emp_id
          ,t.name
          ,SUM(DATEDIFF(mi, tin,tout))/60 as hours
          ,DATEDIFF(mi, MIN(tout), MAX(tin)) AS BreakTimeMinutes
    FROM @table t
    
    GROUP BY t.emp_id, t.name
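
    The same grouping idea can be checked outside T-SQL with Python's sqlite3 (a sketch mirroring the quick test above; `strftime('%s', ...)` stands in for `DATEDIFF`, and SQLite needs zero-padded `HH:MM:SS` time strings):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE timecard (emp_id INT, name TEXT, tin TEXT, tout TEXT)")
con.executemany("INSERT INTO timecard VALUES (?, ?, ?, ?)",
                [(1, "d", "08:30:00", "11:35:00"),
                 (1, "d", "13:00:00", "17:00:00")])

# total hours worked, and the gap between the earliest time-out and the
# latest time-in (the break), per employee -- same shape as the test query
row = con.execute("""
    SELECT emp_id, name,
           SUM(strftime('%s', tout) - strftime('%s', tin)) / 3600 AS hours,
           (MAX(strftime('%s', tin)) - MIN(strftime('%s', tout))) / 60 AS break_minutes
    FROM timecard
    GROUP BY emp_id, name
""").fetchone()
print(row)  # (1, 'd', 7, 85)
```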
    
    qid & accept id: (20707736, 20708437) query: Access query to return several similar records when one is flagged soup:

    soup wrap:

    This query should give you a list of unique priStkCode values for which at least one row exists with False in priPriceConfirmed.

    SELECT DISTINCT priStkCode
    FROM tblPriData
    WHERE priPriceConfirmed = False;
    

    Then you can select the matching tblPriData rows with an INNER JOIN to that query.

    SELECT pd.*
    FROM
        tblPriData AS pd
        INNER JOIN
        (
            SELECT DISTINCT priStkCode
            FROM tblPriData
            WHERE priPriceConfirmed = False
        ) AS sub
        ON pd.priStkCode = sub.priStkCode;
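
    The same pattern can be exercised outside Access with Python's sqlite3 (a sketch with invented sample rows; Access's `False` becomes `0`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tblPriData (priStkCode TEXT, priPriceConfirmed INT)")
con.executemany("INSERT INTO tblPriData VALUES (?, ?)",
                [("A", 1), ("A", 0), ("B", 1), ("C", 0)])

# every row whose priStkCode has at least one unconfirmed (0) price
rows = con.execute("""
    SELECT pd.priStkCode, pd.priPriceConfirmed
    FROM tblPriData AS pd
    INNER JOIN (SELECT DISTINCT priStkCode
                FROM tblPriData
                WHERE priPriceConfirmed = 0) AS sub
        ON pd.priStkCode = sub.priStkCode
    ORDER BY pd.priStkCode, pd.priPriceConfirmed
""").fetchall()
print(rows)  # [('A', 0), ('A', 1), ('C', 0)]
```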
    
    qid & accept id: (20757884, 20758116) query: Calculate last days of months for given period in SQL Server soup:

    soup wrap:

    The easiest option is to have a calendar table, with a last day of the month flag, so your query would simply be:

    SELECT  *
    FROM    dbo.Calendar
    WHERE   Date >= @StartDate
    AND     Date <= @EndDate
    AND     EndOfMonth = 1;
    

    Assuming, of course, that you don't have a calendar table, you can generate a list of dates on the fly:

    DECLARE @s_date DATE = '20130101',
            @e_date DATE = '20130601';
    
    SELECT  Date = DATEADD(DAY, ROW_NUMBER() OVER(ORDER BY Object_ID) - 1, @s_date)
    FROM    sys.all_objects;
    

    Then once you have your dates you can limit them to where the date is the last day of the month (where adding one day makes it the first of the month):

    DECLARE @s_date DATE = '20130101',
            @e_date DATE = '20130601';
    
    WITH Dates AS
    (   SELECT  Date = DATEADD(DAY, ROW_NUMBER() OVER(ORDER BY Object_ID) - 1, @s_date)
        FROM    sys.all_objects
    )
    SELECT  *
    FROM    Dates
    WHERE   Date <= @e_date
    AND     DATEPART(DAY, DATEADD(DAY, 1, Date)) = 1;
    

    Example on SQL Fiddle
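
    The "adding one day makes it the first of the month" trick is easy to verify in plain Python (a sketch, independent of SQL Server):

```python
from datetime import date, timedelta

def month_ends(start, end):
    """Dates in [start, end] where adding one day lands on the 1st."""
    out, d = [], start
    while d <= end:
        if (d + timedelta(days=1)).day == 1:
            out.append(d)
        d += timedelta(days=1)
    return out

ends = month_ends(date(2013, 1, 1), date(2013, 6, 1))
print(ends[0], len(ends))  # 2013-01-31 5
```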

    qid & accept id: (20783264, 20783422) query: Making select and delete queries as single statement soup:

    soup wrap:

    Try this:

    DELETE FROM posts 
    WHERE id IN (SELECT id 
                 FROM (SELECT post_title, MAX(id) id 
                       FROM posts 
                       WHERE post_title IN ('abc', 'xyz') 
                       GROUP BY post_title 
                      ) A 
                )
    

    OR

    DELETE FROM posts 
    WHERE id IN (SELECT id 
                 FROM (SELECT post_title, id 
                       FROM posts 
                       WHERE post_title IN ('abc', 'xyz') 
                       ORDER BY post_title, id DESC
                     ) A 
                GROUP BY post_title)
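
    A quick way to see what the first statement deletes is to run it against SQLite from Python (a sketch with invented rows; the extra derived table `A` is the wrapper that lets MySQL delete from a table it also selects from):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, post_title TEXT)")
con.executemany("INSERT INTO posts VALUES (?, ?)",
                [(1, "abc"), (2, "abc"), (3, "xyz"), (4, "other")])

# deletes the highest id per matching title: id 2 ('abc') and id 3 ('xyz')
con.execute("""
    DELETE FROM posts
    WHERE id IN (SELECT id
                 FROM (SELECT post_title, MAX(id) AS id
                       FROM posts
                       WHERE post_title IN ('abc', 'xyz')
                       GROUP BY post_title) A)
""")
remaining = [r[0] for r in con.execute("SELECT id FROM posts ORDER BY id")]
print(remaining)  # [1, 4]
```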
    
    qid & accept id: (20792891, 20792959) query: Selecting Spicific data placed in the middle of the database table soup:

    soup wrap:

    I guess you can make use of the ROW_NUMBER function, something like this:

    ;WITH OrderedData
     AS
     (
      SELECT * , rn = ROW_NUMBER() OVER (ORDER BY SomeColumn)
      FROM Table_Name
     )
    SELECT * FROM OrderedData
    WHERE rn >= @LowerLimit AND rn <= @UpperLimit
    

    Your Query

    select * from articles 
    where articleid between @indexOfSelection AND @LimitOfselection
    

    You just need to add the keyword AND between your lower limit variable and your upper limit variable.

    Your Stored Procedure

    CREATE PROCEDURE ordered_articles 
    @LowerBound int, 
    @UpperBound int 
    AS 
    BEGIN
      SET NOCOUNT ON;
       select * from articles 
       where articleid between @LowerBound and @UpperBound 
    END
    

    To Select A range Of Rows

    CREATE PROCEDURE ordered_articles 
    @LowerBound int, 
    @UpperBound int 
    AS 
    BEGIN
      SET NOCOUNT ON;
    WITH OrderedData
    AS
     (
      SELECT * , rn = ROW_NUMBER() OVER (ORDER BY articleid)
      FROM articles
     )
    SELECT * FROM OrderedData
    WHERE rn >= @LowerBound AND rn <= @UpperBound
    
    END
    
     EXECUTE ordered_articles 10, 15  --<-- this will return 10 to 15 number row ordered by ArticleID
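
    The ROW_NUMBER paging idea ports directly to any engine with window functions; a sketch using Python's sqlite3 (requires SQLite 3.25+; table contents are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE articles (articleid INTEGER PRIMARY KEY, title TEXT)")
con.executemany("INSERT INTO articles VALUES (?, ?)",
                [(i, "article %d" % i) for i in range(1, 21)])

# rows 10..15 when ordered by articleid
rows = con.execute("""
    WITH OrderedData AS (
        SELECT *, ROW_NUMBER() OVER (ORDER BY articleid) AS rn
        FROM articles
    )
    SELECT articleid FROM OrderedData WHERE rn BETWEEN ? AND ?
""", (10, 15)).fetchall()
print([r[0] for r in rows])  # [10, 11, 12, 13, 14, 15]
```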
    
    qid & accept id: (20794860, 20795034) query: regex in SQL to detect one or more digit soup:

    soup wrap:

    Use the REGEXP operator instead of the LIKE operator.

    Try this:

    SELECT '129387 store' REGEXP '^[0-9]* store$';
    
    SELECT * FROM shop WHERE `name` REGEXP '^[0-9]+ store$';
    

    Check the SQL FIDDLE DEMO

    OUTPUT

    |         NAME |
    |--------------|
    | 129387 store |
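
    SQLite ships no REGEXP implementation by default, but Python's sqlite3 lets you register one, which makes the same pattern testable (a sketch; in SQLite, `X REGEXP Y` invokes the user function `regexp(Y, X)`):

```python
import re
import sqlite3

con = sqlite3.connect(":memory:")
# register regexp(pattern, value) so the REGEXP operator works
con.create_function("regexp", 2,
                    lambda pat, s: re.search(pat, s) is not None)
con.execute("CREATE TABLE shop (name TEXT)")
con.executemany("INSERT INTO shop VALUES (?)",
                [("129387 store",), ("store",), ("12a store",)])

rows = con.execute("SELECT name FROM shop "
                   "WHERE name REGEXP '^[0-9]+ store$'").fetchall()
print(rows)  # [('129387 store',)]
```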
    
    qid & accept id: (20830315, 20831605) query: How to calculate running total (month to date) in SQL Server 2008 soup:
    soup wrap:

    A running total is the summation of a sequence of numbers which is updated each time a new number is added to the sequence, simply by adding the value of the new number to the running total.

    I think he wants a running total for the month by each Representative_Id, so a simple group by week isn't enough. He probably wants his Month_To_Date_Activities_Count to be updated at the end of every week.

    This query gives a running total (month to end-of-week date) ordered by Representative_Id, Week

    SELECT a.Representative_ID, l.month, l.Week, Count(*) AS Total_Week_Activity_Count
        ,(SELECT  count(*)
            FROM ACTIVITIES_FACT a2
            INNER JOIN LU_TIME l2 ON a2.Date = l2.Date
            AND a.Representative_ID = a2.Representative_ID
            WHERE l2.week <=  l.week
            AND l2.month = l.month) Month_To_Date_Activities_Count
    FROM ACTIVITIES_FACT a
    INNER JOIN LU_TIME l ON a.Date = l.Date
    GROUP BY a.Representative_ID, l.Week, l.month
    ORDER BY a.Representative_ID, l.Week
    

    | REPRESENTATIVE_ID | MONTH | WEEK | TOTAL_WEEK_ACTIVITY_COUNT | MONTH_TO_DATE_ACTIVITIES_COUNT |
    |-------------------|-------|------|---------------------------|--------------------------------|
    |                40 |     7 | 7/08 |                         1 |                              1 |
    |                40 |     8 | 8/09 |                         1 |                              1 |
    |                40 |     8 | 8/10 |                         1 |                              2 |
    |                41 |     7 | 7/08 |                         2 |                              2 |
    |                41 |     8 | 8/08 |                         4 |                              4 |
    |                41 |     8 | 8/09 |                         3 |                              7 |
    |                41 |     8 | 8/10 |                         1 |                              8 |
    

    SQL Fiddle Sample
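
    The correlated-subquery approach works the same way on a simplified table; a Python sqlite3 sketch with invented per-week counts:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE activity (rep_id INT, week INT, cnt INT)")
con.executemany("INSERT INTO activity VALUES (?, ?, ?)",
                [(40, 1, 1), (40, 2, 1), (41, 1, 2), (41, 2, 4)])

# per-week count plus a running total per representative
rows = con.execute("""
    SELECT a.rep_id, a.week, a.cnt,
           (SELECT SUM(a2.cnt) FROM activity a2
            WHERE a2.rep_id = a.rep_id AND a2.week <= a.week) AS running
    FROM activity a
    ORDER BY a.rep_id, a.week
""").fetchall()
print(rows)  # [(40, 1, 1, 1), (40, 2, 1, 2), (41, 1, 2, 2), (41, 2, 4, 6)]
```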

    qid & accept id: (20838921, 20838992) query: Add values from the previous row of one column to another column in current row soup:

    soup wrap:

    You can use OUTER APPLY:

    CREATE TABLE #T (Amount INT);
    INSERT #T (Amount) VALUES (1), (2), (3), (4), (5), (6), (7);
    
    SELECT  T.Amount, T2.Amount
    FROM    #T T
            OUTER APPLY
            (   SELECT  Amount = SUM(Amount)
                FROM    #T T2
                WHERE   T2.Amount <= T.Amount
            ) T2;
    
    DROP TABLE #T;
    

    Or a correlated subquery:

    CREATE TABLE #T (Amount INT);
    INSERT #T (Amount) VALUES (1), (2), (3), (4), (5), (6), (7);
    
    SELECT  T.Amount, 
            (   SELECT  Amount = SUM(Amount)
                FROM    #T T2
                WHERE   T2.Amount <= T.Amount
            ) 
    FROM    #T T
    
    DROP TABLE #T;
    

    Both should yield the same plan (In this case they are essentially the same and the IO is identical).
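
    Both variants reduce to the same correlated running sum; here is a sketch of the subquery form against SQLite via Python (same seven sample amounts):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (amount INT)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1, 8)])

# running sum: for each amount, total of all amounts up to and including it
rows = con.execute("""
    SELECT t.amount,
           (SELECT SUM(t2.amount) FROM t t2 WHERE t2.amount <= t.amount) AS running
    FROM t ORDER BY t.amount
""").fetchall()
print(rows)  # [(1, 1), (2, 3), (3, 6), (4, 10), (5, 15), (6, 21), (7, 28)]
```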


    Right, subtraction. Got there in the end. I will go through how I eventually got to the solution because it took me a while; it is not as straightforward as a cumulative sum.

    First I just wrote out a query that was exactly what the logic was, essentially:

    f(x) = x - f(x - 1);
    

    So by copy and pasting the formula from the previous line I got to:

    SELECT  [1] = 1,
            [2] = 2 - 1,
            [3] = 3 - (2 - 1),
            [4] = 4 - (3 - (2 - 1)),
            [5] = 5 - (4 - (3 - (2 - 1))),
            [6] = 6 - (5 - (4 - (3 - (2 - 1)))),
            [7] = 7 - (6 - (5 - (4 - (3 - (2 - 1)))));
    

    I then expanded out all the parentheses to give:

    SELECT  [1] = 1,
            [2] = 2 - 1,
            [3] = 3 - 2 + 1,
            [4] = 4 - 3 + 2 - 1,
            [5] = 5 - 4 + 3 - 2 + 1,
            [6] = 6 - 5 + 4 - 3 + 2 - 1,
            [7] = 7 - 6 + 5 - 4 + 3 - 2 + 1;
    

    As you can see the operator alternates between + and - for each amount as you move down (i.e. for 5 you add the 3, for 6 you minus the 3, then for 7 you add it again).

    This means you need to find out the position of each value to work out whether or not to add or subtract it. So using this:

    SELECT  T.Amount, 
            T2.RowNum,
            T2.Amount
    FROM    #T T
            OUTER APPLY
            (   SELECT  Amount, RowNum = ROW_NUMBER() OVER(ORDER BY Amount DESC)
                FROM    #T T2
                WHERE   T2.Amount < T.Amount
            ) T2
    WHERE   T.Amount IN (4, 5)
    

    You end up with:

    Amount  RowNum  Amount
    -------------------------
    4       1       3
    4       2       2
    4       3       1
    -------------------------
    5       1       4
    5       2       3
    5       3       2
    5       4       1
    

    So remembering the previous formula for these two:

    [4] = 4 - 3 + 2 - 1,
    [5] = 5 - 4 + 3 - 2 + 1,
    

    We can see that where RowNum is odd we need to subtract the second amount, and where it is even we need to add it. We can't use ROW_NUMBER() inside a SUM function, so we then need to perform a second aggregate, giving a final query of:

    SELECT  T.Amount, 
            Subtraction = T.Amount - SUM(ISNULL(T2.Amount, 0))
    FROM    #T T
            OUTER APPLY
            (   SELECT  Amount = CASE WHEN ROW_NUMBER() OVER(ORDER BY Amount DESC) % 2 = 0 THEN -Amount ELSE Amount END
                FROM    #T T2
                WHERE   T2.Amount < T.Amount
            ) T2
    GROUP BY T.Amount;
    

    Example on SQL Fiddle
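
    A quick way to sanity-check the final query is to evaluate the recurrence it implements directly (a plain-Python sketch):

```python
def f(x):
    # the answer's recurrence: f(x) = x - f(x - 1), with f(1) = 1
    return 1 if x == 1 else x - f(x - 1)

results = [f(x) for x in range(1, 8)]
print(results)  # [1, 1, 2, 2, 3, 3, 4]
```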

    qid & accept id: (20922520, 20928271) query: Select data from rows into collection of oracle udt objects soup:

    soup wrap:

    SQL Fiddle

    Oracle 11g R2 Schema Setup:

    CREATE TABLE Test ( A, B, C, D, E ) AS
    SELECT LEVEL, LEVEL * 500, SQRT( LEVEL ), CHR( 64 + LEVEL ), RPAD( CHR( 64 + LEVEL ), 8, CHR( 64 + LEVEL ) )
    FROM DUAL
    CONNECT BY LEVEL <= 26
    /
    
    CREATE TYPE Test_Record AS OBJECT (
      A NUMBER,
      B NUMBER,
      C NUMBER,
      D CHAR(1),
      E CHAR(8)
    )
    /
    
    CREATE TYPE Test_Record_Table AS TABLE OF Test_Record
    /
    
    CREATE PROCEDURE get_Table_Of_Test_Records (
      p_records OUT Test_Record_Table
    )
    IS
    BEGIN
      SELECT Test_Record( A, B, C, D, E )
      BULK COLLECT INTO p_records
      FROM   Test;
    END get_Table_Of_Test_Records;
    /
    

    Query 1:

    DECLARE
      trt Test_Record_Table;
    BEGIN
      get_Table_Of_Test_Records( trt );
    
      -- Do something with the collection.
    END;
    
    qid & accept id: (20935221, 20935611) query: SQL - select a list of lists soup:

    soup wrap:

    How about

    SELECT firstname, lastname, merge_id 
    FROM table t
    ORDER BY t.merge_id
    

    That would give you a record per person, and the merge_id will be ascending:

    1 | Jane Doe 
    1 | John Doe
    2 | max payne
    3 | sub zero
    

    Otherwise, you can use GROUP_CONCAT:

    SELECT merge_id , GROUP_CONCAT(CONCAT(firstname, ' ', lastname))
    FROM table t
    GROUP BY t.merge_id
    ORDER BY t.merge_id
    

    Which will give one record per merge_id:

    1 | Jane Doe, John Doe
    2 | max payne
    3 | sub zero
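
    GROUP_CONCAT also exists in SQLite, so the second query can be sketched from Python (sample names as above; note that concatenation order within a group is not guaranteed):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (firstname TEXT, lastname TEXT, merge_id INT)")
con.executemany("INSERT INTO people VALUES (?, ?, ?)",
                [("Jane", "Doe", 1), ("John", "Doe", 1),
                 ("max", "payne", 2), ("sub", "zero", 3)])

# one row per merge_id, names joined into a single string
rows = con.execute("""
    SELECT merge_id, GROUP_CONCAT(firstname || ' ' || lastname, ', ')
    FROM people
    GROUP BY merge_id
    ORDER BY merge_id
""").fetchall()
print(rows[1], rows[2])  # (2, 'max payne') (3, 'sub zero')
```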
    
    qid & accept id: (20935240, 20935527) query: Point exist in circle soup:

    soup wrap:

    Test Data

    DECLARE @t TABLE (x NUMERIC(10,2), y NUMERIC(10,2), radius NUMERIC(10,2))
    INSERT INTO @t
    VALUES (3.5,3.5, 5.5),(20.5,20.5, 10.5), (30.5,30.5, 20.5)
    

    Query

    DECLARE @p1 NUMERIC(10,2) = 5.5   --<-- Point to check
    DECLARE @p2 NUMERIC(10,2) = 5.5
    
    
    SELECT *, CASE WHEN POWER( @p1 - x, 2) + POWER( @p2 - y, 2) <= POWER(radius, 2)
                 THEN 'Inside The Circle'
                WHEN POWER( @p1 - x, 2) + POWER( @p2 - y, 2) > POWER(radius, 2)
                 THEN 'Outside the Circle' END   [Inside/Outside]
    FROM @t
    

    Result Set

    ╔═══════╦═══════╦════════╦════════════════════╗
    ║   x   ║   y   ║ radius ║   Inside/Outside   ║
    ╠═══════╬═══════╬════════╬════════════════════╣
    ║ 3.50  ║ 3.50  ║ 5.50   ║ Inside The Circle  ║
    ║ 20.50 ║ 20.50 ║ 10.50  ║ Outside the Circle ║
    ║ 30.50 ║ 30.50 ║ 20.50  ║ Outside the Circle ║
    ╚═══════╩═══════╩════════╩════════════════════╝
    

    As the question was closed, I could not add another answer, so I edited this to include a solution using SQL Server geometry types... [Uses same data points as above, plus one to demo a point exactly on the circle]

    Declare @t TABLE 
       (x NUMERIC(10,2), y NUMERIC(10,2), 
        radius NUMERIC(10,2))
    Insert @t
    Values (3.5,3.5, 5.5),(20.5,20.5, 10.5), 
           (30.5,30.5, 20.5), (-5.5, 5.5, 11.0)
    
    -- --------------------------
    Declare @pX float = 5.5    
    Declare @pY float = 5.5
    Declare @c geometry;
    Declare @p geometry;
    Select x, y, radius, 
          (geometry::Point(X, Y, 0)).STDistance(geometry::Point(@pX, @pY, 0))
    From @T
    Where (geometry::Point(X, Y, 0)).STDistance(geometry::Point(@pX, @pY, 0)) > radius
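
    The arithmetic behind both queries is just a squared-distance comparison; a plain-Python sketch over the same three sample circles:

```python
def point_in_circle(px, py, cx, cy, radius):
    # inside (or on the edge) when squared distance <= radius squared
    return (px - cx) ** 2 + (py - cy) ** 2 <= radius ** 2

# (x, y, radius) rows from the test data, checking point (5.5, 5.5)
circles = [(3.5, 3.5, 5.5), (20.5, 20.5, 10.5), (30.5, 30.5, 20.5)]
flags = [point_in_circle(5.5, 5.5, x, y, r) for x, y, r in circles]
print(flags)  # [True, False, False]
```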
    
    qid & accept id: (20954662, 20954835) query: Merge queries into 1 for sorting soup:

    soup wrap:

    You can do this with nested subqueries:

    select u.user_id, count(*) as numusers,
           (SELECT COUNT(user_id) FROM visitors v WHERE v.user_id = u.user_id) as NumVisitors,
           (SELECT SUM(amount) FROM visitors v WHERE v.user_id = u.user_id) as VisitorAmount,
           (SELECT COUNT(user_id) FROM sales s WHERE s.user_id = u.user_id) as NumSales
    from users u
    group by u.user_id;
    

    You can also do this by joining pre-aggregated queries:

    select u.user_id, v.NumVisitors, v.VisitorAmount, s.NumSales
    from (select u.user_id, count(*) as NumUsers
          from users u
          group by u.user_id
         ) u left outer join
         (select v.user_id, count(user_id) as NumVisitors, sum(amount) as VisitorAmount
          from visitors v
          group by v.user_id
         ) v
         on u.user_id = v.user_id left outer join
         (select s.user_id, count(user_id) as NumSales
          from sales s
          group by s.user_id
         ) s
         on s.user_id = u.user_id;
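
    The pre-aggregated variant is easy to exercise with Python's sqlite3 (a sketch with invented rows; the visitors subquery is joined on its `user_id` column):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (user_id INT);
    CREATE TABLE visitors (user_id INT, amount INT);
    CREATE TABLE sales (user_id INT);
    INSERT INTO users VALUES (1), (2);
    INSERT INTO visitors VALUES (1, 10), (1, 20);
    INSERT INTO sales VALUES (1), (1), (2);
""")

# LEFT JOINs keep users with no visitors/sales (NULL aggregates)
rows = con.execute("""
    SELECT u.user_id, v.NumVisitors, v.VisitorAmount, s.NumSales
    FROM users u
    LEFT JOIN (SELECT user_id, COUNT(*) AS NumVisitors,
                      SUM(amount) AS VisitorAmount
               FROM visitors GROUP BY user_id) v ON v.user_id = u.user_id
    LEFT JOIN (SELECT user_id, COUNT(*) AS NumSales
               FROM sales GROUP BY user_id) s ON s.user_id = u.user_id
    ORDER BY u.user_id
""").fetchall()
print(rows)  # [(1, 2, 30, 2), (2, None, None, 1)]
```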
    
    qid & accept id: (21075815, 21076138) query: Interactive Query soup:

    soup wrap:

    Assuming you are coding an app wherein the user supplies the inputs, there are multiple ways to create a query that uses those values as variables - one way is as follows:

    SET @t1=1, @t2=2, @t3:=4;
    SELECT @t1, @t2;
    

    Source: http://dev.mysql.com/doc/refman/5.5/en/user-variables.html

    So for your particular case, replacing all the instances of X with the MySQL syntax for a user-defined variable @X, it would look something like this:

    SET @X = user_input;
    SELECT @X AS DISTANCE,
    SUM(ABS(LOCX) <= @X AND ABS(LOCY) <= @X) AS QUANTITY,
    COUNT(*) AS TOTAL,
    CONCAT(AVG(ABS(LOCX) <= @X AND ABS(LOCY) <= @X)*100, '%') AS PERCENTAGE
    FROM CUSTOMER;
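
    From application code, the usual equivalent of those user variables is a bound parameter; a Python sqlite3 sketch of the same counting query (sample coordinates invented; the PERCENTAGE column is omitted for brevity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer (locx INT, locy INT)")
con.executemany("INSERT INTO customer VALUES (?, ?)",
                [(1, 1), (3, -2), (10, 0)])

x = 5  # plays the role of the user-supplied @X
row = con.execute("""
    SELECT ? AS distance,
           SUM(ABS(locx) <= ? AND ABS(locy) <= ?) AS quantity,
           COUNT(*) AS total
    FROM customer
""", (x, x, x)).fetchone()
print(row)  # (5, 2, 3)
```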
    
    qid & accept id: (21136618, 21136726) query: SQLite create table from table soup:

    soup wrap:

    with

    SELECT sql FROM sqlite_master WHERE type='table' AND name='mytable' 
    

    you can get the structure. This you can modify and create your new table. And finally you can

    INSERT INTO MyTableCopy SELECT * FROM mytable;
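
    The whole round trip (read the DDL, rename it, recreate, bulk-copy) can be sketched from Python's sqlite3; the naive `str.replace` rename is an assumption that the table name appears first in the DDL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO mytable (name) VALUES (?)", [("a",), ("b",)])

# 1) fetch the original CREATE TABLE statement
ddl = con.execute("SELECT sql FROM sqlite_master "
                  "WHERE type='table' AND name='mytable'").fetchone()[0]
# 2) modify it (here: just rename the table) and create the copy
con.execute(ddl.replace("mytable", "MyTableCopy", 1))
# 3) copy the rows across
con.execute("INSERT INTO MyTableCopy SELECT * FROM mytable")

count = con.execute("SELECT COUNT(*) FROM MyTableCopy").fetchone()[0]
print(count)  # 2
```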
    
    qid & accept id: (21167225, 21167787) query: Select from table during update soup:

    soup wrap:

    This is one of the times you need to denormalise. Create a table

    create table PreProcessedTotal (
       JaccardTotal decimal(18, 4) not null
    )
    

    (substitute the appropriate data type). You need to add three triggers to table PreProcessed:

    • An Insert trigger to add the value of Jaccard in the new row
    • An Update trigger, to add the INSERTED value and subtract the DELETED value
    • A Delete trigger to subtract the deleted value

    You can then use:

    select Jaccard / JaccardTotal
    from Preprocessed with (nolock)
    cross join PreProcessedTotal with (nolock)
    

    The with (nolock) may not be needed. You'll also need to populate the PreProcessedTotal table with the current total when you put it live.
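As a sketch of the three triggers (shown in SQLite syntax for brevity, since its NEW/OLD row references are compact; in SQL Server you would use the Inserted and Deleted pseudo-tables instead):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE PreProcessed (Jaccard DECIMAL(18,4) NOT NULL);
CREATE TABLE PreProcessedTotal (JaccardTotal DECIMAL(18,4) NOT NULL);
INSERT INTO PreProcessedTotal VALUES (0);

-- Insert trigger: add the value of Jaccard in the new row
CREATE TRIGGER t_ins AFTER INSERT ON PreProcessed BEGIN
    UPDATE PreProcessedTotal SET JaccardTotal = JaccardTotal + NEW.Jaccard;
END;
-- Update trigger: add the inserted value and subtract the deleted one
CREATE TRIGGER t_upd AFTER UPDATE ON PreProcessed BEGIN
    UPDATE PreProcessedTotal
    SET JaccardTotal = JaccardTotal + NEW.Jaccard - OLD.Jaccard;
END;
-- Delete trigger: subtract the deleted value
CREATE TRIGGER t_del AFTER DELETE ON PreProcessed BEGIN
    UPDATE PreProcessedTotal SET JaccardTotal = JaccardTotal - OLD.Jaccard;
END;

INSERT INTO PreProcessed VALUES (1.5), (2.5);
UPDATE PreProcessed SET Jaccard = 3.5 WHERE Jaccard = 2.5;
DELETE FROM PreProcessed WHERE Jaccard = 1.5;
""")
(total,) = con.execute("SELECT JaccardTotal FROM PreProcessedTotal").fetchone()
```

After the insert, update and delete above, the maintained total equals the sum of the surviving rows.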

    qid & accept id: (21234177, 21234992) query: Find first rows of change in historical table soup:
    CREATE TABLE T1 (A decimal(8,0), B int, C decimal(8,0))\nINSERT INTO T1 (A, B, C) VALUES (123, 0, 20130101),\n(123, 0, 20130102),(123, 1, 20130103),\n(123, 1, 20130104),(123, 0, 20130105),\n(123, 2, 20130106),(123, 2, 20130107),\n(123, 2, 20130108),(123, 0, 20130109),\n(123, 3, 20130110),(123, 3, 20130111),\n(123, 3, 20130112),(123, 3, 20130113)\n\n\n;with x as\n(\n  select t1.A, t1.B, t1.C, \n  row_number() over (partition by a order by c) rn \n  from T1\n)\nselect x1.A, x1.B, x1.C \nfrom x x1\nleft join x x2\non x1.rn = x2.rn +1 and x1.A = x2.A\nwhere x2.A is null\nor x1.B <> x2.B\n
    \n

    Result:

    \n
    A   B   C\n123 0   20130101\n123 1   20130103\n123 0   20130105\n123 2   20130106\n123 0   20130109\n123 3   20130110\n
    \n soup wrap:
    CREATE TABLE T1 (A decimal(8,0), B int, C decimal(8,0))
    INSERT INTO T1 (A, B, C) VALUES (123, 0, 20130101),
    (123, 0, 20130102),(123, 1, 20130103),
    (123, 1, 20130104),(123, 0, 20130105),
    (123, 2, 20130106),(123, 2, 20130107),
    (123, 2, 20130108),(123, 0, 20130109),
    (123, 3, 20130110),(123, 3, 20130111),
    (123, 3, 20130112),(123, 3, 20130113)
    
    
    ;with x as
    (
      select t1.A, t1.B, t1.C, 
      row_number() over (partition by a order by c) rn 
      from T1
    )
    select x1.A, x1.B, x1.C 
    from x x1
    left join x x2
    on x1.rn = x2.rn +1 and x1.A = x2.A
    where x2.A is null
    or x1.B <> x2.B
    

    Result:

    A   B   C
    123 0   20130101
    123 1   20130103
    123 0   20130105
    123 2   20130106
    123 0   20130109
    123 3   20130110
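For reference, essentially the same query runs against SQLite 3.25+ (which added window functions), so the logic is easy to verify with Python's sqlite3 module using the sample data above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE T1 (A INTEGER, B INTEGER, C INTEGER)")
con.executemany("INSERT INTO T1 VALUES (?,?,?)", [
    (123, 0, 20130101), (123, 0, 20130102), (123, 1, 20130103),
    (123, 1, 20130104), (123, 0, 20130105), (123, 2, 20130106),
    (123, 2, 20130107), (123, 2, 20130108), (123, 0, 20130109),
    (123, 3, 20130110), (123, 3, 20130111), (123, 3, 20130112),
    (123, 3, 20130113),
])

# Number the rows per A, then compare each row to its predecessor:
# keep it when there is no predecessor or when B changed.
rows = con.execute("""
    WITH x AS (
        SELECT A, B, C, ROW_NUMBER() OVER (PARTITION BY A ORDER BY C) rn
        FROM T1
    )
    SELECT x1.A, x1.B, x1.C
    FROM x x1
    LEFT JOIN x x2 ON x1.rn = x2.rn + 1 AND x1.A = x2.A
    WHERE x2.A IS NULL OR x1.B <> x2.B
    ORDER BY x1.C
""").fetchall()
```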
    
    qid & accept id: (21250631, 21257149) query: SQL Server - PIVOT - two columns into rows soup:

    There are a few different ways that you can get the result that you want. Similar to @Sheela K R's answer you can use an aggregate function with a CASE expression but it can be written in a more concise way:

    \n
    select \n  max(case when rowid = 1 then first end) First1,\n  max(case when rowid = 1 then last end) Last1,\n  max(case when rowid = 2 then first end) First2,\n  max(case when rowid = 2 then last end) Last2,\n  max(case when rowid = 3 then first end) First3,\n  max(case when rowid = 3 then last end) Last3,\n  max(case when rowid = 4 then first end) First4,\n  max(case when rowid = 4 then last end) Last4,\n  max(case when rowid = 5 then first end) First5,\n  max(case when rowid = 5 then last end) Last5\nfrom yourtable;\n
    \n

    See SQL Fiddle with Demo.

    \n

    This could also be written using the PIVOT function, however since you want to pivot multiple columns then you would first want to look at unpivoting your First and Last columns.

    \n

    The unpivot process will convert your multiple columns into multiple rows of data. You did not specify what version of SQL Server you are using but you can use a SELECT with UNION ALL with CROSS APPLY or even the UNPIVOT function to perform the first conversion:

    \n
    select col = col + cast(rowid as varchar(10)), value\nfrom yourtable\ncross apply \n(\n  select 'First', First union all\n  select 'Last', Last\n) c (col, value)\n
    \n

    See SQL Fiddle with Demo. This converts your data into the format:

    \n
    |    COL |       VALUE |\n|--------|-------------|\n| First1 | RandomName1 |\n|  Last1 | RandomLast1 |\n| First2 | RandomName2 |\n|  Last2 | RandomLast2 |\n
    \n

    Once the data is in multiple rows, then you can easily apply the PIVOT function:

    \n
    select First1, Last1, \n  First2, Last2,\n  First3, Last3, \n  First4, Last4, \n  First5, Last5\nfrom\n(\n  select col = col + cast(rowid as varchar(10)), value\n  from yourtable\n  cross apply \n  (\n    select 'First', First union all\n    select 'Last', Last\n  ) c (col, value)\n) d\npivot\n(\n  max(value)\n  for col in (First1, Last1, First2, Last2,\n              First3, Last3, First4, Last4, First5, Last5)\n) piv;\n
    \n

    See SQL Fiddle with Demo

    \n

    Both give a result of:

    \n
    |      FIRST1 |       LAST1 |      FIRST2 |       LAST2 |      FIRST3 |       LAST3 |      FIRST4 |       LAST4 |      FIRST5 |       LAST5 |\n|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|\n| RandomName1 | RandomLast1 | RandomName2 | RandomLast2 | RandomName3 | RandomLast3 | RandomName4 | RandomLast4 | RandomName5 | RandomLast5 |\n
    \n soup wrap:

    There are a few different ways that you can get the result that you want. Similar to @Sheela K R's answer you can use an aggregate function with a CASE expression but it can be written in a more concise way:

    select 
      max(case when rowid = 1 then first end) First1,
      max(case when rowid = 1 then last end) Last1,
      max(case when rowid = 2 then first end) First2,
      max(case when rowid = 2 then last end) Last2,
      max(case when rowid = 3 then first end) First3,
      max(case when rowid = 3 then last end) Last3,
      max(case when rowid = 4 then first end) First4,
      max(case when rowid = 4 then last end) Last4,
      max(case when rowid = 5 then first end) First5,
      max(case when rowid = 5 then last end) Last5
    from yourtable;
    

    See SQL Fiddle with Demo.

    This could also be written using the PIVOT function, however since you want to pivot multiple columns then you would first want to look at unpivoting your First and Last columns.

    The unpivot process will convert your multiple columns into multiple rows of data. You did not specify what version of SQL Server you are using but you can use a SELECT with UNION ALL with CROSS APPLY or even the UNPIVOT function to perform the first conversion:

    select col = col + cast(rowid as varchar(10)), value
    from yourtable
    cross apply 
    (
      select 'First', First union all
      select 'Last', Last
    ) c (col, value)
    

    See SQL Fiddle with Demo. This converts your data into the format:

    |    COL |       VALUE |
    |--------|-------------|
    | First1 | RandomName1 |
    |  Last1 | RandomLast1 |
    | First2 | RandomName2 |
    |  Last2 | RandomLast2 |
    

    Once the data is in multiple rows, then you can easily apply the PIVOT function:

    select First1, Last1, 
      First2, Last2,
      First3, Last3, 
      First4, Last4, 
      First5, Last5
    from
    (
      select col = col + cast(rowid as varchar(10)), value
      from yourtable
      cross apply 
      (
        select 'First', First union all
        select 'Last', Last
      ) c (col, value)
    ) d
    pivot
    (
      max(value)
      for col in (First1, Last1, First2, Last2,
                  First3, Last3, First4, Last4, First5, Last5)
    ) piv;
    

    See SQL Fiddle with Demo

    Both give a result of:

    |      FIRST1 |       LAST1 |      FIRST2 |       LAST2 |      FIRST3 |       LAST3 |      FIRST4 |       LAST4 |      FIRST5 |       LAST5 |
    |-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|
    | RandomName1 | RandomLast1 | RandomName2 | RandomLast2 | RandomName3 | RandomLast3 | RandomName4 | RandomLast4 | RandomName5 | RandomLast5 |
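If you want to experiment with the conditional-aggregation version outside SQL Server, a trimmed-down sketch (two rows instead of five, table contents invented) also runs in SQLite via Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# An explicitly declared "rowid" column shadows SQLite's implicit rowid
con.execute("CREATE TABLE yourtable (rowid INTEGER, first TEXT, last TEXT)")
con.execute("""INSERT INTO yourtable VALUES
    (1, 'RandomName1', 'RandomLast1'),
    (2, 'RandomName2', 'RandomLast2')""")

# max(CASE ...) picks out the single non-NULL value per rowid slot
row = con.execute("""
    SELECT max(CASE WHEN rowid = 1 THEN first END) AS First1,
           max(CASE WHEN rowid = 1 THEN last  END) AS Last1,
           max(CASE WHEN rowid = 2 THEN first END) AS First2,
           max(CASE WHEN rowid = 2 THEN last  END) AS Last2
    FROM yourtable
""").fetchone()
```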
    
    qid & accept id: (21251963, 21252025) query: Select Single and Duplicate Row and Return Multiple Columns soup:

    Could it be as simple as:

    \n
    SELECT DISTINCT Code, Stuff FROM MyTable\n
    \n

    Or, just add stuff to the partition by clause:

    \n
    PARTITION BY Code,Stuff ORDER BY Code\n
    \n soup wrap:

    Could it be as simple as:

    SELECT DISTINCT Code, Stuff FROM MyTable
    

    Or, just add stuff to the partition by clause:

    PARTITION BY Code,Stuff ORDER BY Code
    
    qid & accept id: (21259677, 25725716) query: How to store Word documents in SQL Server 2008? soup:

    Finally I got the answer; you can store any file into the db:

    \n
    \n

    Step 1: Get the document's information as binary (convert all text into ASCII binary format, because if it contains any functional operator it will break your INSERT query).

    \n

    Step 2: Get the document extension, for example (.docx, .pdf, .ppt), and include it with your INSERT query.

    \n
    \n
    if (file != null && file.ContentLength > 0)\n            {\n                string contentType = file.ContentType;\n\n                byte[] fileData = new byte[file.InputStream.Length];\n                file.InputStream.Read(fileData, 0, fileData.Length);\n\n                string OriginalName = Path.GetFileName(file.FileName);\n                string Username = User.Identity.Name;\n\n                Models.File myFile = new Models.File(contentType, OriginalName, fileData, Username);\n                myFile.Save();\n            }\n
    \n
    \n

    Step 3: For retrieving your documents you can use something like this

    \n
    \n
       public ActionResult Download()\n    {\n        string Originalname = string.Empty;\n        byte[] FileData = null;\n        var requestedID = RouteData.Values["id"];\n        if (requestedID.ToString() != null)\n        {\n            Guid id = new Guid(requestedID.ToString());\n            DataSet ds = new DataSet();\n            Models.UsersGroups dt = new Models.UsersGroups();\n            ds = dt.GetItem(id);\n            foreach (DataRow item in ds.Tables[0].Rows)\n            {\n                Originalname = item["OriginalName"].ToString();\n                FileData = (byte[])item["FileData"];\n            }\n            Response.AppendHeader("Content-Disposition", "attachment;filename=\"" + Originalname + "\"");\n            Response.BinaryWrite(FileData);\n        }\n        return File(FileData, "application/x-unknown");\n    }\n
    \n soup wrap:

    Finally I got the answer; you can store any file into the db:

    Step 1: Get the document's information as binary (convert all text into ASCII binary format, because if it contains any functional operator it will break your INSERT query).

    Step 2: Get the document extension, for example (.docx, .pdf, .ppt), and include it with your INSERT query.

    if (file != null && file.ContentLength > 0)
                {
                    string contentType = file.ContentType;
    
                    byte[] fileData = new byte[file.InputStream.Length];
                    file.InputStream.Read(fileData, 0, fileData.Length);
    
                    string OriginalName = Path.GetFileName(file.FileName);
                    string Username = User.Identity.Name;
    
                    Models.File myFile = new Models.File(contentType, OriginalName, fileData, Username);
                    myFile.Save();
                }
    

    Step 3: For retrieving your documents you can use something like this

       public ActionResult Download()
        {
            string Originalname = string.Empty;
            byte[] FileData = null;
            var requestedID = RouteData.Values["id"];
            if (requestedID.ToString() != null)
            {
                Guid id = new Guid(requestedID.ToString());
                DataSet ds = new DataSet();
                Models.UsersGroups dt = new Models.UsersGroups();
                ds = dt.GetItem(id);
                foreach (DataRow item in ds.Tables[0].Rows)
                {
                    Originalname = item["OriginalName"].ToString();
                    FileData = (byte[])item["FileData"];
                }
                Response.AppendHeader("Content-Disposition", "attachment;filename=\"" + Originalname + "\"");
                Response.BinaryWrite(FileData);
            }
            return File(FileData, "application/x-unknown");
        }
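The underlying idea is independent of ASP.NET: store the raw bytes plus the original file name, and read them back later. A minimal sketch with Python and SQLite standing in for the real stack (a parameterized insert also avoids the "functional operator breaks the INSERT" problem mentioned in step 1); the file name and bytes are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Files (OriginalName TEXT, FileData BLOB)")

# Store: the original name plus the raw bytes (no text conversion needed
# when the query is parameterized)
payload = b"%PDF-1.4 fake document bytes"
con.execute("INSERT INTO Files VALUES (?, ?)", ("report.pdf", payload))

# Retrieve: the name drives the Content-Disposition header, the bytes go
# back to the client unchanged
name, data = con.execute("SELECT OriginalName, FileData FROM Files").fetchone()
```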
    
    qid & accept id: (21270528, 21270597) query: How to add more than one foreign key? soup:
    \n

    How can I connect Member Name to other table

    \n
    \n

    Don't - leave Member Name in the member table. There should not be any reason to have a Member Name field in the Member_Fees_Record table if you can join it back to Member through the ID:

    \n
    Member (Member ID, Member_Name, Age, Address)\n\nMember_Fees_Record (Member ID, Fee)\n
    \n

    Example query:

    \n
    SELECT m.MemberId, f.Fee, m.Member_Name, m.Address, m.Age\nFROM Member m\nINNER JOIN Member_Fees_Record f ON m.MemberID = f.MemberID\n
    \n soup wrap:

    How can I connect Member Name to other table

    Don't - leave Member Name in the member table. There should not be any reason to have a Member Name field in the Member_Fees_Record table if you can join it back to Member through the ID:

    Member (Member ID, Member_Name, Age, Address)
    
    Member_Fees_Record (Member ID, Fee)
    

    Example query:

    SELECT m.MemberId, f.Fee, m.Member_Name, m.Address, m.Age
    FROM Member m
    INNER JOIN Member_Fees_Record f ON m.MemberID = f.MemberID
    
    qid & accept id: (21280605, 21280968) query: Update Multiple SQL Server Columns from Access 2010 Form soup:

    You can enumerate selected items in each ListBox and build the SQL. Something like this

    \n
    sql = "UPDATE tableName SET ColumnToUpdate = '" & txtZ & "' "\nsql = sql & "WHERE Column1 IN (" & GetValuesFromList(listBoxX) & ") "\nsql = sql & "AND Column2 IN (" & GetValuesFromList(listBoxy) & ")"\n
    \n

    And the function GetValuesFromList:

    \n
    Private Function GetValuesFromList(lst As ListBox) As String\nDim Items As String\nDim Item As Variant\n\n    Items = ""\n    For Each Item In lst.ItemsSelected\n        Items = Items & lst.ItemData(Item) & ","\n    Next\n    GetValuesFromList = Left(Items, Len(Items) - 1)\nEnd Function\n
    \n

    If the selected values in the list boxes are string values, you should modify the function to concatenate the quotes.

    \n soup wrap:

    You can enumerate selected items in each ListBox and build the SQL. Something like this

    sql = "UPDATE tableName SET ColumnToUpdate = '" & txtZ & "' "
    sql = sql & "WHERE Column1 IN (" & GetValuesFromList(listBoxX) & ") "
    sql = sql & "AND Column2 IN (" & GetValuesFromList(listBoxy) & ")"
    

    And the function GetValuesFromList:

    Private Function GetValuesFromList(lst As ListBox) As String
    Dim Items As String
    Dim Item As Variant
    
        Items = ""
        For Each Item In lst.ItemsSelected
            Items = Items & lst.ItemData(Item) & ","
        Next
        GetValuesFromList = Left(Items, Len(Items) - 1)
    End Function
    

    If the selected values in the list boxes are string values, you should modify the function to concatenate the quotes.
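The same enumerate-and-join idea in Python, but using placeholders instead of string concatenation, which sidesteps both the quoting issue mentioned above and SQL injection (table and column names below are illustrative):

```python
# Items the user picked in each list box (stand-ins for ListBox.ItemsSelected)
selected_x = [10, 20, 30]
selected_y = ["a'b", "c"]      # note the quote that would break concatenation

placeholders_x = ",".join("?" for _ in selected_x)
placeholders_y = ",".join("?" for _ in selected_y)

sql = ("UPDATE tableName SET ColumnToUpdate = ? "
       f"WHERE Column1 IN ({placeholders_x}) "
       f"AND Column2 IN ({placeholders_y})")
params = ["newValue", *selected_x, *selected_y]
# cursor.execute(sql, params) would run it against a real connection
```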

    qid & accept id: (21281481, 21281590) query: making a new column with last 10 digits of an other colulmn soup:

    You can use the RIGHT function

    \n

    MySQL RIGHT() extracts a specified number of characters from the right side of a string.

    \n
    UPDATE user SET phone_last_ten = RIGHT(phone, 10) \n
    \n

    Or

    \n
    UPDATE user SET phone_last_ten = RIGHT(CONVERT(Phone, CHAR(50)), 10) \n
    \n

    DEMO

    \n soup wrap:

    You can use the RIGHT function

    MySQL RIGHT() extracts a specified number of characters from the right side of a string.

    UPDATE user SET phone_last_ten = RIGHT(phone, 10) 
    

    Or

    UPDATE user SET phone_last_ten = RIGHT(CONVERT(Phone, CHAR(50)), 10) 
    

    DEMO
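For comparison, SQLite has no RIGHT(), but substr() with a negative start does the same thing, as does plain Python slicing (the phone number is made up):

```python
import sqlite3

phone = "001-555-867-5309"   # made-up value

# SQLite equivalent: substr with a negative start counts from the right
con = sqlite3.connect(":memory:")
(last_ten,) = con.execute("SELECT substr(?, -10)", (phone,)).fetchone()

# Plain Python equivalent of RIGHT(phone, 10)
py_last_ten = phone[-10:]
```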

    qid & accept id: (21286642, 21286868) query: Return latest row ordered by ID while using group by soup:

    You can use the substring_index()/group_concat() trick:

    \n
    select a.title,\n       substring_index(group_concat(status order by id desc), ',', 1) as laststatus\nfrom b join\n     a\n     on a.id = b.a_id\ngroup by a.title;\n
    \n

    EDIT:

    \n

    If you just want the last record from b, you can do:

    \n
    select a.title, b.status\nfrom b join\n     a\n     on a.id = b.a_id\norder by b.id desc\nlimit 1;\n
    \n soup wrap:

    You can use the substring_index()/group_concat() trick:

    select a.title,
           substring_index(group_concat(status order by id desc), ',', 1) as laststatus
    from b join
         a
         on a.id = b.a_id
    group by a.title;
    

    EDIT:

    If you just want the last record from b, you can do:

    select a.title, b.status
    from b join
         a
         on a.id = b.a_id
    order by b.id desc
    limit 1;
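What the substring_index(group_concat(...)) trick computes can be emulated in plain Python to check the logic: per title, keep the status from the row with the highest id (the joined sample rows below are invented):

```python
# Joined (a.title, b.id, b.status) rows, invented for the example
rows = [
    ("post-1", 1, "draft"),
    ("post-2", 2, "draft"),
    ("post-1", 3, "published"),
]

# group_concat(status ORDER BY id DESC) then substring_index(..., ',', 1)
# boils down to: for each title, take the status with the highest id
laststatus = {}
for title, b_id, status in sorted(rows, key=lambda r: r[1]):
    laststatus[title] = status   # later (higher) ids overwrite earlier ones
```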
    
    qid & accept id: (21286804, 21287383) query: How to select only numbers from a text field soup:

    It is possible that this or a variation may suit:

    \n
     SELECT t.Field1, Mid([Field1],InStr([field1],"(")+1,4) AS Stripped\n FROM TheTable As t\n
    \n

    For example:

    \n
     UPDATE TheTable AS t SET [field2] = Mid([Field1],InStr([field1],"(")+1,4);\n
    \n

    EDIT re comment

    \n

    If the field ends in u), that is, a letter followed by a bracket, you can say:

    \n
     UPDATE TheTable AS t SET [field2] =\n Mid([Field1],InStr([field1],"(")+1,Len(Mid([Field1],InStr([field1],"(")))-3)\n
    \n soup wrap:

    It is possible that this or a variation may suit:

     SELECT t.Field1, Mid([Field1],InStr([field1],"(")+1,4) AS Stripped
     FROM TheTable As t
    

    For example:

     UPDATE TheTable AS t SET [field2] = Mid([Field1],InStr([field1],"(")+1,4);
    

    EDIT re comment

    If the field ends in u), that is, a letter followed by a bracket, you can say:

     UPDATE TheTable AS t SET [field2] =
     Mid([Field1],InStr([field1],"(")+1,Len(Mid([Field1],InStr([field1],"(")))-3)
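The Mid/InStr expression translates almost directly to Python string operations; a sketch with a made-up field value:

```python
field1 = "Widget (1234)"              # made-up sample value

# Mid([Field1], InStr([Field1], "(") + 1, 4): four characters after the "("
# InStr is 1-based and Mid starts one past the "(", which in 0-based Python
# is simply index("(") + 1
start = field1.index("(") + 1
stripped = field1[start:start + 4]
```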
    
    qid & accept id: (21302307, 21842979) query: How to migrate from CodeIgniter database to Laravel database soup:

    Databases are pretty much the same in Laravel or CodeIgniter. If your tables are good the way they are and have a primary key named id (this is not mandatory either), you can just connect Laravel to your database and it will work just fine.

    \n

    For your new tables, you can create new migrations and Laravel will not complain about this.

    \n

    Well, but if you really need to migrate to a whole new database, you can do the following:

    \n

    1) rename the tables you need to migrate

    \n
    php artisan migrate:make\n
    \n

    2) create all your migrations and migrate them:

    \n
    php artisan migrate\n
    \n

    3) use your database server sql utility to copy data from one table to another, it will be way faster than creating everything in Laravel, believe me. Most databases will let you do things like:

    \n
    INSERT INTO users (FirstName, LastName)\nSELECT FirstName, LastName\nFROM users_old\n
    \n

    And in some you'll be able to do the same using two different databases and column names

    \n
    INSERT INTO NEWdatabasename.users (firstName+' '+Lastname, email)\nSELECT name, email\nFROM OLDdatabasename.\n
    \n

    Or you can just export data to a CSV file and then create a method in your Laravel seeding class to load that data into your database, with a lot of data to import, you just have to remember to execute:

    \n
    DB::disableQueryLog();\n
    \n

    So your PHP doesn't run out of memory.

    \n

    See? There are a lot of options, probably many more, so pick one and if you need help, shoot more questions.

    \n soup wrap:

    Databases are pretty much the same in Laravel or CodeIgniter. If your tables are good the way they are and have a primary key named id (this is not mandatory either), you can just connect Laravel to your database and it will work just fine.

    For your new tables, you can create new migrations and Laravel will not complain about this.

    Well, but if you really need to migrate to a whole new database, you can do the following:

    1) rename the tables you need to migrate

    php artisan migrate:make
    

    2) create all your migrations and migrate them:

    php artisan migrate
    

    3) use your database server sql utility to copy data from one table to another, it will be way faster than creating everything in Laravel, believe me. Most databases will let you do things like:

    INSERT INTO users (FirstName, LastName)
    SELECT FirstName, LastName
    FROM users_old
    

    And in some you'll be able to do the same using two different databases and column names

    INSERT INTO NEWdatabasename.users (firstName+' '+Lastname, email)
    SELECT name, email
    FROM OLDdatabasename.
    

    Or you can just export data to a CSV file and then create a method in your Laravel seeding class to load that data into your database, with a lot of data to import, you just have to remember to execute:

    DB::disableQueryLog();
    

    So your PHP doesn't run out of memory.

    See? There are a lot of options, probably many more, so pick one and if you need help, shoot more questions.

    qid & accept id: (21311393, 21313710) query: MS SQL - User Defined Function - Slope Intercept RSquare ; How to Group by Portfolio soup:

    Wow, this is a really cool example of how to use nested CTEs in an inline table-valued function. You want to use an ITVF since they are fast. See Wayne Sheffield’s blog article that attests to this fact.

    \n

    If it is really complicated, I always start with a sample database/table to make sure I give the user a correct solution.

    \n

    Let's create a database named [Test] based on the model database.

    \n
    --\n-- Create a simple db\n--\n\n-- use master\nuse master;\ngo\n\n-- delete existing databases\nIF EXISTS (SELECT name FROM sys.databases WHERE name = N'Test')\nDROP DATABASE Test\nGO\n\n-- simple db based on model\ncreate database Test;\ngo\n\n-- switch to new db\nuse [Test];\ngo\n
    \n

    Let's create a table type named [InputToLinearReg].

    \n
    --\n-- Create table type to pass data\n--\n\n-- Delete the existing table type\nIF  EXISTS (SELECT * FROM sys.systypes WHERE name = 'InputToLinearReg')\nDROP TYPE dbo.InputToLinearReg\nGO\n\n--  Create the table type\nCREATE TYPE InputToLinearReg AS TABLE\n(\nportfolio_cd char(1),\nmonth_num int,\ncollections_amt money\n);\ngo\n
    \n

    Okay, here is the multi-layered SELECT statement that uses CTE's. The query analyzer treats this as a SQL statement which can be executed in parallel versus a regular function that can't. See the black box section of Wayne's article.

    \n
    --\n-- Create in line table value function (fast)\n--\n\n-- Remove if it exists\nIF OBJECT_ID('CalculateLinearReg') > 0\nDROP FUNCTION CalculateLinearReg\nGO\n\n-- Create the function\nCREATE FUNCTION CalculateLinearReg\n( \n    @ParmInTable AS dbo.InputToLinearReg READONLY \n) \nRETURNS TABLE \nAS\nRETURN\n(\n\n  WITH cteRawData as\n  (\n    SELECT\n        T.portfolio_cd,\n        CAST(T.month_num as decimal(18, 6)) as x,\n        LOG(CAST(T.collections_amt as decimal(18, 6))) as y\n    FROM\n        @ParmInTable as T\n  ),\n\n  cteAvgByPortfolio as\n  (\n    SELECT\n        portfolio_cd,\n        AVG(x) as xavg,\n        AVG(y) as yavg\n    FROM\n        cteRawData \n    GROUP BY \n        portfolio_cd\n  ),\n\n  cteSlopeByPortfolio as\n  (\n    SELECT\n        R.portfolio_cd,\n        SUM((R.x - A.xavg) * (R.y - A.yavg)) / SUM(POWER(R.x - A.xavg, 2)) as slope\n    FROM\n        cteRawData as R \n    INNER JOIN \n        cteAvgByPortfolio A\n    ON \n        R.portfolio_cd = A.portfolio_cd\n    GROUP BY \n        R.portfolio_cd\n  ),\n\n  cteInterceptByPortfolio as\n  (\n    SELECT\n        A.portfolio_cd,\n        (A.yavg - (S.slope * A.xavg)) as intercept\n    FROM\n        cteAvgByPortfolio as A\n    INNER JOIN \n        cteSlopeByPortfolio S\n    ON \n        A.portfolio_cd = S.portfolio_cd\n\n  )\n\n  SELECT \n      A.portfolio_cd,\n      A.xavg,\n      A.yavg,\n      S.slope,\n      I.intercept,\n      1 - (SUM(POWER(R.y - (I.intercept + S.slope * R.x), 2)) /\n      (SUM(POWER(R.y - (I.intercept + S.slope * R.x), 2)) + \n      SUM(POWER(((I.intercept + S.slope * R.x) - A.yavg), 2)))) as rsquared\n  FROM\n      cteRawData as R \n        INNER JOIN \n      cteAvgByPortfolio as A ON R.portfolio_cd = A.portfolio_cd\n        INNER JOIN \n      cteSlopeByPortfolio S ON A.portfolio_cd = S.portfolio_cd\n        INNER JOIN \n      cteInterceptByPortfolio I ON S.portfolio_cd = I.portfolio_cd\n  GROUP BY \n      A.portfolio_cd,\n      A.xavg,\n      A.yavg,\n     
 S.slope,\n      I.intercept\n);\n
    \n

    Last but not least, set up a table variable and get the answers. Unlike your solution above, it groups by portfolio id.

    \n
    -- Load data into variable\nDECLARE @InTable AS InputToLinearReg;\n\n-- insert data\ninsert into @InTable\nvalues\n('A', 1, 100.00),\n('A', 2, 90.00),\n('A', 3, 80.00),\n('A', 4, 70.00),\n('B', 1, 100.00),\n('B', 2, 90.00),\n('B', 3, 80.00);\n\n-- show data\nselect * from CalculateLinearReg(@InTable)\ngo\n
    \n

    Here is a picture of the results using your data.

    \n


    \n soup wrap:

    Wow, this is a really cool example of how to use nested CTEs in an inline table-valued function. You want to use an ITVF since they are fast. See Wayne Sheffield’s blog article that attests to this fact.

    If it is really complicated, I always start with a sample database/table to make sure I give the user a correct solution.

    Let's create a database named [Test] based on the model database.

    --
    -- Create a simple db
    --
    
    -- use master
    use master;
    go
    
    -- delete existing databases
    IF EXISTS (SELECT name FROM sys.databases WHERE name = N'Test')
    DROP DATABASE Test
    GO
    
    -- simple db based on model
    create database Test;
    go
    
    -- switch to new db
    use [Test];
    go
    

    Let's create a table type named [InputToLinearReg].

    --
    -- Create table type to pass data
    --
    
    -- Delete the existing table type
    IF  EXISTS (SELECT * FROM sys.systypes WHERE name = 'InputToLinearReg')
    DROP TYPE dbo.InputToLinearReg
    GO
    
    --  Create the table type
    CREATE TYPE InputToLinearReg AS TABLE
    (
    portfolio_cd char(1),
    month_num int,
    collections_amt money
    );
    go
    

    Okay, here is the multi-layered SELECT statement that uses CTE's. The query analyzer treats this as a SQL statement which can be executed in parallel versus a regular function that can't. See the black box section of Wayne's article.

    --
    -- Create in line table value function (fast)
    --
    
    -- Remove if it exists
    IF OBJECT_ID('CalculateLinearReg') > 0
    DROP FUNCTION CalculateLinearReg
    GO
    
    -- Create the function
    CREATE FUNCTION CalculateLinearReg
    ( 
        @ParmInTable AS dbo.InputToLinearReg READONLY 
    ) 
    RETURNS TABLE 
    AS
    RETURN
    (
    
      WITH cteRawData as
      (
        SELECT
            T.portfolio_cd,
            CAST(T.month_num as decimal(18, 6)) as x,
            LOG(CAST(T.collections_amt as decimal(18, 6))) as y
        FROM
            @ParmInTable as T
      ),
    
      cteAvgByPortfolio as
      (
        SELECT
            portfolio_cd,
            AVG(x) as xavg,
            AVG(y) as yavg
        FROM
            cteRawData 
        GROUP BY 
            portfolio_cd
      ),
    
      cteSlopeByPortfolio as
      (
        SELECT
            R.portfolio_cd,
            SUM((R.x - A.xavg) * (R.y - A.yavg)) / SUM(POWER(R.x - A.xavg, 2)) as slope
        FROM
            cteRawData as R 
        INNER JOIN 
            cteAvgByPortfolio A
        ON 
            R.portfolio_cd = A.portfolio_cd
        GROUP BY 
            R.portfolio_cd
      ),
    
      cteInterceptByPortfolio as
      (
        SELECT
            A.portfolio_cd,
            (A.yavg - (S.slope * A.xavg)) as intercept
        FROM
            cteAvgByPortfolio as A
        INNER JOIN 
            cteSlopeByPortfolio S
        ON 
            A.portfolio_cd = S.portfolio_cd
    
      )
    
      SELECT 
          A.portfolio_cd,
          A.xavg,
          A.yavg,
          S.slope,
          I.intercept,
          1 - (SUM(POWER(R.y - (I.intercept + S.slope * R.x), 2)) /
          (SUM(POWER(R.y - (I.intercept + S.slope * R.x), 2)) + 
          SUM(POWER(((I.intercept + S.slope * R.x) - A.yavg), 2)))) as rsquared
      FROM
          cteRawData as R 
            INNER JOIN 
          cteAvgByPortfolio as A ON R.portfolio_cd = A.portfolio_cd
            INNER JOIN 
          cteSlopeByPortfolio S ON A.portfolio_cd = S.portfolio_cd
            INNER JOIN 
          cteInterceptByPortfolio I ON S.portfolio_cd = I.portfolio_cd
      GROUP BY 
          A.portfolio_cd,
          A.xavg,
          A.yavg,
          S.slope,
          I.intercept
    );
    

    Last but not least, set up a table variable and get the answers. Unlike your solution above, it groups by portfolio id.

    -- Load data into variable
    DECLARE @InTable AS InputToLinearReg;
    
    -- insert data
    insert into @InTable
    values
    ('A', 1, 100.00),
    ('A', 2, 90.00),
    ('A', 3, 80.00),
    ('A', 4, 70.00),
    ('B', 1, 100.00),
    ('B', 2, 90.00),
    ('B', 3, 80.00);
    
    -- show data
    select * from CalculateLinearReg(@InTable)
    go
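As a cross-check of the math inside the function, the same per-portfolio slope, intercept and r-squared over log(collections) can be computed in a few lines of plain Python on the sample data:

```python
import math
from collections import defaultdict

# Sample data from above: (portfolio, month, collections)
data = [("A", 1, 100.0), ("A", 2, 90.0), ("A", 3, 80.0), ("A", 4, 70.0),
        ("B", 1, 100.0), ("B", 2, 90.0), ("B", 3, 80.0)]

groups = defaultdict(list)
for pf, month, amt in data:
    groups[pf].append((float(month), math.log(amt)))   # x = month, y = LOG(amount)

results = {}
for pf, pts in groups.items():
    xavg = sum(x for x, _ in pts) / len(pts)
    yavg = sum(y for _, y in pts) / len(pts)
    slope = (sum((x - xavg) * (y - yavg) for x, y in pts)
             / sum((x - xavg) ** 2 for x, _ in pts))
    intercept = yavg - slope * xavg
    sse = sum((y - (intercept + slope * x)) ** 2 for x, y in pts)    # residual
    ssr = sum(((intercept + slope * x) - yavg) ** 2 for x, _ in pts)  # explained
    results[pf] = (slope, intercept, 1 - sse / (sse + ssr))
```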
    

    Here is a picture of the results using your data.


    qid & accept id: (21313983, 21314536) query: How to return only 1 (specific) instance of column value when multiple instances exist soup:

    I think this is the logic that you want to get the date:

    \n
    select itemcode,\n       coalesce(min(case when qty_available > 0 then date end), min(date)) as thedate\nfrom timtest tt\nwhere date >= date(now())\ngroup by itemcode;\n
    \n

    The expression coalesce(min(case when qty > 0 then date end), min(date)) seems to encapsulate your logic. The first part of the coalesce returns the first date when qty > 0. If none of these exist, then it finds the first date with 0. You don't state what to do when there is no record for today, but there is a record in the future for 0. This returns the first such record.

    \n

    To get the quantity, let's join back to this:

    \n
    select tt.*\nfrom timtest tt join\n     (select itemcode,\n             coalesce(min(case when qty_available > 0 then date end), min(date)) as thedate\n      from timtest tt\n      where date >= date(now())\n      group by itemcode\n     ) id\n     on tt.itemcode = id.itemcode and tt.date = id.thedate;\n
    \n

    EDIT:

    \n

    No accounting for bad date formats. Here is a version for this situation:

    \n
    select tt.*\nfrom timtest tt join\n     (select itemcode,\n             coalesce(min(case when qty_available > 0 then thedate end), min(thedate)) as thedate\n      from (select tt.*, str_to_date(date, '%m/%d/%Y') as thedate\n            from timtest tt\n           ) tt\n      where thedate >= date(now())\n      group by itemcode\n     ) id\n     on tt.itemcode = id.itemcode and str_to_date(tt.date, '%m/%d/%Y') = id.thedate;\n
    \n

    Advice for the future: store dates in the database as a date/datetime data type and not as strings. If you have to store them as strings, use the YYYY-MM-DD format, because then comparisons and order by work correctly.

    \n soup wrap:

    I think this is the logic that you want to get the date:

    select itemcode,
           coalesce(min(case when qty_available > 0 then date end), min(date)) as thedate
    from timtest tt
    where date >= date(now())
    group by itemcode;
    

    The expression coalesce(min(case when qty > 0 then date end), min(date)) seems to encapsulate your logic. The first part of the coalesce returns the first date when qty > 0. If none of these exist, then it finds the first date with 0. You don't state what to do when there is no record for today, but there is a record in the future for 0. This returns the first such record.

    To get the quantity, let's join back to this:

    select tt.*
    from timtest tt join
         (select itemcode,
                 coalesce(min(case when qty_available > 0 then date end), min(date)) as thedate
          from timtest tt
          where date >= date(now())
          group by itemcode
         ) id
         on tt.itemcode = id.itemcode and tt.date = id.thedate;
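
The coalesce(min(case ...)) trick above can be sketched as a runnable example using SQLite via Python's sqlite3. The table name and columns mirror the answer, but the rows are made-up sample data, and date(now()) is replaced by a fixed literal date so the result is reproducible:

```python
# Minimal sketch, assuming made-up inventory rows; '2014-02-01' stands in
# for date(now()) so the output is deterministic.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE timtest (itemcode TEXT, date TEXT, qty_available INTEGER);
INSERT INTO timtest VALUES
  ('A', '2014-02-02', 0), ('A', '2014-02-03', 5),
  ('B', '2014-02-02', 0), ('B', '2014-02-04', 0);
""")

rows = con.execute("""
    SELECT itemcode,
           coalesce(min(CASE WHEN qty_available > 0 THEN date END),
                    min(date)) AS thedate
    FROM timtest
    WHERE date >= '2014-02-01'      -- stand-in for date(now())
    GROUP BY itemcode
    ORDER BY itemcode
""").fetchall()
# Item 'A' has an upcoming date with stock, so that wins;
# item 'B' never has stock, so it falls back to min(date).
```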
    

    EDIT:

    The above doesn't account for bad date formats (dates stored as m/d/Y strings). Here is a version for that situation:

    select tt.*
    from timtest tt join
         (select itemcode,
                 coalesce(min(case when qty_available > 0 then thedate end), min(thedate)) as thedate
          from (select tt.*, str_to_date(date, '%m/%d/%Y') as thedate
                from timtest tt
               ) tt
          where thedate >= date(now())
          group by itemcode
         ) id
         on tt.itemcode = id.itemcode and str_to_date(tt.date, '%m/%d/%Y') = id.thedate;
    

    Advice for the future: store dates in the database as a date/datetime data type and not as strings. If you have to store them as strings, use the YYYY-MM-DD format, because then comparisons and ORDER BY still work.

    qid & accept id: (21352556, 21352901) query: Using unique records as table header soup:

    soup wrap:

    The generic SQL approach is to use conditional aggregation:

    select s.studentName,
           max(case when su.subjectName = 'subject1' then g.grade end) as Subject1,
           max(case when su.subjectName = 'subject2' then g.grade end) as Subject2,
           max(case when su.subjectName = 'subject3' then g.grade end) as Subject3
    from (students s join
          grades g
          on s.student_id = g.student_id
         ) join
         subjects su
         on g.subject_id = su.subject_id
    group by s.student_id, s.studentName;
    

    Several databases also support the pivot syntax to do this.
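
The conditional-aggregation form is portable enough to demonstrate in SQLite via Python's sqlite3. This is a minimal sketch with a simplified single-table schema and made-up grades, not the OP's three-table layout:

```python
# Minimal sketch of conditional aggregation (the CASE-inside-MAX pivot),
# using a simplified, made-up table rather than the OP's schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE grades (studentName TEXT, subjectName TEXT, grade INTEGER);
INSERT INTO grades VALUES
  ('Ann', 'subject1', 90), ('Ann', 'subject2', 80),
  ('Bob', 'subject1', 70), ('Bob', 'subject3', 60);
""")

# One row per student; each MAX(CASE ...) picks out a single subject's grade.
rows = con.execute("""
    SELECT studentName,
           MAX(CASE WHEN subjectName = 'subject1' THEN grade END) AS Subject1,
           MAX(CASE WHEN subjectName = 'subject2' THEN grade END) AS Subject2,
           MAX(CASE WHEN subjectName = 'subject3' THEN grade END) AS Subject3
    FROM grades
    GROUP BY studentName
    ORDER BY studentName
""").fetchall()
```

Subjects a student never took come back as NULL, which is why the CASE has no ELSE branch.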

    EDIT:

    The Access query is:

    select s.studentName,
           max(iif(su.subjectName = 'subject1', g.grade, NULL)) as Subject1,
           max(iif(su.subjectName = 'subject2', g.grade, NULL)) as Subject2,
           max(iif(su.subjectName = 'subject3', g.grade, NULL)) as Subject3
    from (students s inner join
          grades g
          on s.student_id = g.student_id) inner join
         subjects su
         on g.subject_id = su.subject_id
    group by s.student_id, s.studentName;
    
    qid & accept id: (21367807, 21367977) query: How to select last published comment created by student? soup:

    soup wrap:

    Try one of the following solutions:

    SELECT  src.Id, src.FirstName, src.LastName, src.Comment, src.InsertAt
    FROM 
    (
        SELECT  s.Id, s.FirstName, s.LastName, sc.Comment, sc.InsertAt,
                ROW_NUMBER() OVER(PARTITION BY sc.StudentId ORDER BY sc.InsertAt DESC) RowNum
        FROM    dbo.Students s INNER JOIN dbo.StudentComments sc ON s.Id = sc.StudentId
        --WHERE sc.IsPublished = 1
    ) src
    WHERE   src.RowNum = 1; 
    

    or

    SELECT  s.Id, s.FirstName, s.LastName, lc.Comment, lc.InsertAt
    FROM    dbo.Students s 
    CROSS APPLY (
        SELECT  TOP(1) sc.Comment, sc.InsertAt
        FROM    dbo.StudentComments sc 
        WHERE   s.Id = sc.StudentId
        --AND       sc.IsPublished = 1
        ORDER BY sc.InsertAt DESC
    ) lc; -- Last comment
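
The first (ROW_NUMBER) solution is portable; here is a minimal sketch in SQLite (3.25+ for window functions) via Python's sqlite3, with made-up sample rows. CROSS APPLY is SQL Server-specific and has no direct SQLite equivalent:

```python
# Latest-comment-per-student via ROW_NUMBER(); sample data is made up.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Students (Id INTEGER PRIMARY KEY, FirstName TEXT);
CREATE TABLE StudentComments (StudentId INTEGER, Comment TEXT, InsertAt TEXT);
INSERT INTO Students VALUES (1, 'Ann'), (2, 'Bob');
INSERT INTO StudentComments VALUES
  (1, 'older',  '2014-01-01'), (1, 'latest', '2014-01-05'),
  (2, 'only',   '2014-01-03');
""")

rows = con.execute("""
    SELECT Id, FirstName, Comment FROM (
        SELECT s.Id, s.FirstName, sc.Comment,
               ROW_NUMBER() OVER (PARTITION BY sc.StudentId
                                  ORDER BY sc.InsertAt DESC) AS RowNum
        FROM Students s JOIN StudentComments sc ON s.Id = sc.StudentId
    ) src
    WHERE src.RowNum = 1
    ORDER BY Id
""").fetchall()
```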
    
    qid & accept id: (21384239, 21387895) query: SQL Query to show all available rooms under a property soup:

    soup wrap:

    It sounds like you are trying to build a report and do the presentation in SQL instead of in your web solution.

    Keep the data and its presentation separate.
    Get your data table, and then loop through it with PHP, creating a table for every building.

    Ordinarily you would use recursion, but MySQL doesn't support it.

    You can use

    ORDER BY premise.name, premise.id, room.nr, room.id
    

    My guess is you need to group by room and property fields, using the max aggregate function for address and city fields, because a property (building) can have multiple addresses, one for each entrance...

    SELECT 
         premises.field_1
        ,premises.field_2
        ,premises.field_3
    
        ,room.field_1
        ,room.field_2
        ,room.field_3
    
        ,max(address.field1) as adr_f1
        ,max(address.field2) as adr_f2
        ,max(address.field3) as adr_f3   
    FROM Whatever
    
    JOIN WHATEVER
    
    WHERE (1=1) 
    AND (whatever)
    
    GROUP BY 
    
         premises.field_1
        ,premises.field_2
        ,premises.field_3
    
        ,room.field_1
        ,room.field_2
        ,room.field_3
    
    HAVING (WHATEVER)
    
    ORDER BY premises.field_x, room.field_y
    
    qid & accept id: (21389431, 21389784) query: how do I know the minimum date in a query? soup:

    soup wrap:

    Either:

    select min(stamp) from tbl
    

    Or:

    select stamp from tbl order by stamp asc limit 1
    

    The first can also be used as a window function, if you need it on an entire set without grouping.

    If you need the date in the stamp, cast it:

    select min(stamp::date) from tbl
    

    Or:

    select stamp::date from tbl order by stamp asc limit 1
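
Both forms return the same row, which is easy to check in SQLite via Python's sqlite3. The ::date cast is PostgreSQL syntax; SQLite's date() function plays the same role here, and the timestamps are made-up sample data:

```python
# Quick check that min() and ORDER BY ... LIMIT 1 agree; date() stands in
# for the PostgreSQL ::date cast.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tbl (stamp TEXT);
INSERT INTO tbl VALUES
  ('2014-01-29 10:00:00'), ('2014-01-27 08:30:00'), ('2014-01-28 12:00:00');
""")

via_min   = con.execute("SELECT min(stamp) FROM tbl").fetchone()[0]
via_order = con.execute(
    "SELECT stamp FROM tbl ORDER BY stamp ASC LIMIT 1").fetchone()[0]
date_only = con.execute("SELECT min(date(stamp)) FROM tbl").fetchone()[0]
```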
    
    qid & accept id: (21400367, 21400496) query: Separate rows based on a column that has min value soup:

    soup wrap:

    You're almost there. Just remove the AttendanceTime from the group by.

    SELECT tal.PersonNo, min(tal.AttendanceTime) 
      FROM mqa.T_AttendanceLog tal
     GROUP BY tal.PersonNo;
    

    If you want the entire row (in case you have other columns) you can use something like this:

    select *
      from mqa.T_AttendanceLog a
     where (PersonNo, AttendanceTime) in(
             select b.PersonNo, min(b.AttendanceTime)
               from mqa.T_AttendanceLog b
              group by b.PersonNo);
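
The row-value (col1, col2) IN (subquery) form also works in SQLite (3.15+), so the whole-row variant can be sketched via Python's sqlite3 with made-up attendance rows:

```python
# Earliest punch per person, keeping all columns of the matching row.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE AttendanceLog (PersonNo INTEGER, AttendanceTime TEXT, Door TEXT);
INSERT INTO AttendanceLog VALUES
  (1, '08:00', 'A'), (1, '17:00', 'B'),
  (2, '09:15', 'A'), (2, '18:00', 'C');
""")

rows = con.execute("""
    SELECT * FROM AttendanceLog a
    WHERE (a.PersonNo, a.AttendanceTime) IN (
        SELECT b.PersonNo, min(b.AttendanceTime)
        FROM AttendanceLog b GROUP BY b.PersonNo)
    ORDER BY a.PersonNo
""").fetchall()
# Only the earliest punch per person survives.
```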
    
    qid & accept id: (21409033, 21409141) query: How to iterate through a table from last row to first? soup:

    soup wrap:

    Change the MySQL statement to

    SELECT * FROM `mytable` ORDER BY `id` DESC
    

    or reverse the array using PHP's array_reverse() function

    return array_reverse($data);
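
Both approaches can be shown side by side in Python with sqlite3 (array_reverse() is PHP; slicing with [::-1] is the Python equivalent). Note that MySQL identifiers take backticks, never single quotes; in this sketch they need no quoting at all, and the table contents are made up:

```python
# ORDER BY ... DESC on the database side vs. reversing on the client side.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE mytable (id INTEGER PRIMARY KEY, val TEXT);
INSERT INTO mytable (val) VALUES ('a'), ('b'), ('c');
""")

newest_first = con.execute(
    "SELECT id, val FROM mytable ORDER BY id DESC").fetchall()
# Client-side alternative (PHP's array_reverse; list[::-1] in Python):
client_side = con.execute(
    "SELECT id, val FROM mytable ORDER BY id").fetchall()[::-1]
```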
    
    qid & accept id: (21424132, 21424242) query: Replace values in an sql query according to results of a nested query soup:

    soup wrap:

    You can use FIND_IN_SET()

    SELECT *
      FROM request r JOIN locations l
        ON FIND_IN_SET(loc_id, locations) > 0
     WHERE loc_name = 'mordor'
    

    Here is SQLFiddle demo

    But you would be better off normalizing your data by introducing a many-to-many table, which might look like this:

    CREATE TABLE request_location
    (
      request_id INT NOT NULL,
      loc_id INT NOT NULL,
      PRIMARY KEY (request_id, loc_id),
      FOREIGN KEY (request_id) REFERENCES request (request_id),
      FOREIGN KEY (loc_id) REFERENCES locations (loc_id)
    );
    

    This will pay off big time in the long run, enabling you to maintain and query your data normally.

    Your query then may look like

    SELECT *
      FROM request_location rl JOIN request r 
        ON rl.request_id = r.request_id JOIN locations l
        ON rl.loc_id = l.loc_id
     WHERE l.loc_name = 'mordor'
    

    or even

    SELECT rl.request_id
      FROM request_location rl JOIN locations l
        ON rl.loc_id = l.loc_id
     WHERE l.loc_name = 'mordor';
    

    if you need to return only request_id

    Here is SQLFiddle demo
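
The normalized schema and the request_id query can be sketched end to end in SQLite via Python's sqlite3; the rows here are made-up sample data:

```python
# Normalized many-to-many schema from the answer, with sample rows.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE locations (loc_id INTEGER PRIMARY KEY, loc_name TEXT);
CREATE TABLE request (request_id INTEGER PRIMARY KEY);
CREATE TABLE request_location
(
  request_id INTEGER NOT NULL REFERENCES request (request_id),
  loc_id     INTEGER NOT NULL REFERENCES locations (loc_id),
  PRIMARY KEY (request_id, loc_id)
);
INSERT INTO locations VALUES (1, 'mordor'), (2, 'shire');
INSERT INTO request VALUES (10), (11);
INSERT INTO request_location VALUES (10, 1), (10, 2), (11, 2);
""")

ids = [r[0] for r in con.execute("""
    SELECT rl.request_id
    FROM request_location rl JOIN locations l ON rl.loc_id = l.loc_id
    WHERE l.loc_name = 'mordor'
""")]
```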

    qid & accept id: (21438881, 21438974) query: How to pass the returned value of SELECT statement to DELETE query in stored procedure? soup:

    soup wrap:

    Let's suppose you have the following schema

    CREATE TABLE customers
    (
      customer_id INT, 
      customer_email VARCHAR(17),
      PRIMARY KEY (customer_id)
    );
    CREATE TABLE child_table
    (
      child_id INT,
      customer_id INT, 
      value INT,
      PRIMARY KEY (child_id),
      FOREIGN KEY (customer_id) REFERENCES customers (customer_id)
    );
    

    Now to delete all child records knowing an email of the customer you can use multi-table delete syntax

    CREATE PROCEDURE deleteCustomerData(IN emailAddr VARCHAR(50)) 
      DELETE t
        FROM child_table t JOIN customers c 
          ON t.customer_id = c.customer_id
       WHERE c.customer_email = emailAddr;
    

    Here is SQLFiddle demo


    ...but if i want to pass the returned value of SELECT stmt to DELETE...

    That is exactly what you're doing in the example above. But you can always rewrite it this way:

    DELETE t
      FROM child_table t JOIN 
    (
      SELECT customer_id 
        FROM customers JOIN ...
       WHERE customer_email = emailAddr
         AND ...
    ) c
        ON t.customer_id = c.customer_id
    

    or

    DELETE 
      FROM child_table 
     WHERE customer_id IN 
    (
      SELECT customer_id 
        FROM customers JOIN ...
       WHERE customer_email = emailAddr
         AND ...
    ) 
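
The IN-subquery variant is the most portable of the three; here is a minimal sketch in SQLite via Python's sqlite3 (SQLite has no multi-table DELETE). The schema follows the answer and the rows are made up:

```python
# Delete child rows for a customer identified by email, via IN (subquery).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, customer_email TEXT);
CREATE TABLE child_table (child_id INTEGER PRIMARY KEY,
                          customer_id INTEGER, value INTEGER);
INSERT INTO customers VALUES (1, 'a@example.com'), (2, 'b@example.com');
INSERT INTO child_table VALUES (10, 1, 5), (11, 1, 6), (12, 2, 7);
""")

con.execute("""
    DELETE FROM child_table
    WHERE customer_id IN (SELECT customer_id FROM customers
                          WHERE customer_email = ?)
""", ("a@example.com",))

remaining = [r[0] for r in con.execute("SELECT child_id FROM child_table")]
```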
    
    qid & accept id: (21481598, 21481638) query: How to SELECT records from One table If Matching Record In Not Found In Other Table soup:

    soup wrap:

    Just add this to your WHERE clause:

    AND DU.das_id_fk IS NULL
    

    Say I have the following two tables:

    +-------------------------+   +-------------------------+
    | Person                  |   | Pet                     |
    +----------+--------------+   +-------------------------+
    | PersonID | INT(11)      |   | PetID    | INT(11)      |
    | Name     | VARCHAR(255) |   | PersonID | INT(11)      |
    +----------+--------------+   | Name     | VARCHAR(255) |
                                  +----------+--------------+
    

    And my tables contain the following data:

    +------------------------+    +---------------------------+
    | Person                 |    | Pet                       |
    +----------+-------------+    +-------+----------+--------+
    | PersonID | Name        |    | PetID | PersonID | Name   |
    +----------+-------------+    +-------+----------+--------+
    | 1        | Sean        |    | 5     | 1        | Lucy   |
    | 2        | Javier      |    | 6     | 1        | Cooper |
    | 3        | tradebel123 |    | 7     | 2        | Fluffy |
    +----------+-------------+    +-------+----------+--------+
    

    Now, if I want a list of all Persons:

    SELECT pr.PersonID, pr.Name
    FROM
        Person pr
    

    If I want a list of Persons that have pets (including their pet's names):

    SELECT pr.PersonID, pr.Name, pt.Name AS PetName
    FROM
        Person pr
        INNER JOIN Pet pt ON pr.PersonID = pt.PersonID
    

    If I want a list of Persons that have no pets:

    SELECT pr.PersonID, pr.`Name`
    FROM
        Person pr
        LEFT JOIN Pet pt ON pr.PersonID = pt.PersonID
    WHERE
        pt.`PetID` IS NULL
    

    If I want a list of all Persons and their pets (even if they don't have pets):

    SELECT
        pr.PersonID,
        pr.Name,
        COALESCE(pt.Name, '') AS PetName
    FROM
        Person pr
        LEFT JOIN Pet pt ON pr.PersonID = pt.PersonID
    

    If I want a list of Persons and a count of how many pets they have:

    SELECT pr.PersonID, pr.Name, COUNT(pt.PetID) AS NumPets
    FROM
        Person pr
        LEFT JOIN Pet pt ON pr.PersonID = pt.PersonID
    GROUP BY
        pr.PersonID, pr.Name
    

    Same as above, but don't show Persons with 0 pets:

    SELECT pr.PersonID, pr.Name, COUNT(pt.PetID) AS NumPets
    FROM
        Person pr
        LEFT JOIN Pet pt ON pr.PersonID = pt.PersonID
    GROUP BY
        pr.PersonID, pr.Name
    HAVING COUNT(pt.PetID) > 0
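
Two of the queries above, the "no pets" anti-join and the pet count, can be verified against the sample data from the answer using SQLite via Python's sqlite3:

```python
# Person/Pet sample data from the answer; anti-join and LEFT JOIN count.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Person (PersonID INTEGER, Name TEXT);
CREATE TABLE Pet (PetID INTEGER, PersonID INTEGER, Name TEXT);
INSERT INTO Person VALUES (1, 'Sean'), (2, 'Javier'), (3, 'tradebel123');
INSERT INTO Pet VALUES (5, 1, 'Lucy'), (6, 1, 'Cooper'), (7, 2, 'Fluffy');
""")

# Persons with no pets: LEFT JOIN, then keep rows where no Pet matched.
no_pets = con.execute("""
    SELECT pr.Name FROM Person pr
    LEFT JOIN Pet pt ON pr.PersonID = pt.PersonID
    WHERE pt.PetID IS NULL
""").fetchall()

# Pet count per person; COUNT(pt.PetID) ignores the NULLs from non-matches.
pet_counts = con.execute("""
    SELECT pr.Name, COUNT(pt.PetID) FROM Person pr
    LEFT JOIN Pet pt ON pr.PersonID = pt.PersonID
    GROUP BY pr.PersonID, pr.Name
    ORDER BY pr.PersonID
""").fetchall()
```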
    
    qid & accept id: (21528762, 21529300) query: (Possibly) Complex Join across four tables using aggregates soup:

    soup wrap:

    If you only want the latest row you can turn each of your subqueries into an APPLY:

    SELECT  Account.Name, 
            AnnAccs.PeriodEnd AS AnnAccsPeriodEnd, 
            AnnAccs.LastPeriod AS AnnAccsLastPeriod,
            CorpTax.PeriodEnd AS CorpTaxPeriodEnd, 
            CorpTax.LastPeriod AS CorpTaxLastPeriod,
            SelfAss.PeriodEnd AS SAPeriodEnd, 
            SelfAss.LastPeriod AS SALastPeriod
    FROM    dbo.Account 
            OUTER APPLY
            (   SELECT  TOP 1
                        ca.new_PeriodEnd AS PeriodEnd, 
                        ca.new_LastPeriod AS LastPeriod, 
                        new_CorporationTaxActivityId AS AccId 
                FROM    new_corporationtaxactivity ca
                WHERE   ca.new_CorporationTaxActivityId = Account.AccountId 
                ORDER BY ca.new_PeriodEnd DESC
            ) AS CorpTax 
            OUTER APPLY
            (   SELECT  TOP 1 aa.new_PeriodEnd AS PeriodEnd, 
                        aa.new_LastPeriod AS LastPeriod, 
                        aa.new_AnnualAccountsActivityId AS AccId 
                FROM    new_annualaccountsactivity aa
                WHERE   aa.new_AnnualAccountsActivityId = Account.AccountId 
                ORDER BY aa.new_PeriodEnd DESC
            ) AS AnnAccs 
            OUTER APPLY
            (   SELECT  TOP 1 sa.new_PeriodEnd AS PeriodEnd, 
                        sa.new_LastPeriod AS LastPeriod, 
                        sa.new_SelfAssessmentActivityId AS AccId 
                FROM    new_selfassessmentactivity sa
                WHERE   sa.new_SelfAssessmentActivityId = Account.AccountId
                ORDER BY sa.new_PeriodEnd DESC
            ) As SelfAss 
    WHERE   (Account.new_ClientStatus = '100000000' OR Account.new_ClientStatus = '100000001')
    AND     (AnnAccs.LastPeriod = '1' OR CorpTax.LastPeriod = '1' OR SelfAss.LastPeriod = '1')
    

    Or you can add ROW_NUMBER() to each of your subqueries and limit it to the top result (RowNum = 1):

    SELECT  Account.Name, 
            AnnAccs.PeriodEnd AS AnnAccsPeriodEnd, 
            AnnAccs.LastPeriod AS AnnAccsLastPeriod,
            CorpTax.PeriodEnd AS CorpTaxPeriodEnd, 
            CorpTax.LastPeriod AS CorpTaxLastPeriod,
            SelfAss.PeriodEnd AS SAPeriodEnd, 
            SelfAss.LastPeriod AS SALastPeriod
    FROM    dbo.Account 
            LEFT JOIN 
            (   SELECT  ca.new_PeriodEnd AS PeriodEnd, 
                        ca.new_LastPeriod AS LastPeriod, 
                        ca.new_CorporationTaxActivityId AS AccId,
                        ROW_NUMBER() OVER(PARTITION BY ca.new_CorporationTaxActivityId ORDER BY ca.new_PeriodEnd DESC) AS RowNum
                FROM    new_corporationtaxactivity  ca
            ) AS CorpTax 
                ON CorpTax.AccId = Account.AccountId 
                AND CorpTax.RowNum = 1
            LEFT JOIN 
            (   SELECT  aa.new_PeriodEnd AS PeriodEnd, 
                        aa.new_LastPeriod AS LastPeriod, 
                        aa.new_AnnualAccountsActivityId AS AccId,
                        ROW_NUMBER() OVER(PARTITION BY aa.new_AnnualAccountsActivityId ORDER BY aa.new_PeriodEnd DESC) AS RowNum
                FROM    new_annualaccountsactivity aa
            ) AS AnnAccs 
                ON AnnAccs.AccId = Account.AccountId
                AND AnnAccs.RowNum = 1
            LEFT JOIN 
            (   SELECT  sa.new_PeriodEnd AS PeriodEnd, 
                        sa.new_LastPeriod AS LastPeriod, 
                        sa.new_SelfAssessmentActivityId AS AccId,
                        ROW_NUMBER() OVER(PARTITION BY sa.new_SelfAssessmentActivityId ORDER BY sa.new_PeriodEnd DESC) AS RowNum
                FROM    new_selfassessmentactivity sa
            ) As SelfAss 
                ON SelfAss.AccId = Account.AccountId
                AND SelfAss.RowNum = 1
    WHERE   (Account.new_ClientStatus = '100000000' OR Account.new_ClientStatus = '100000001')
    AND     (AnnAccs.LastPeriod = '1' OR CorpTax.LastPeriod = '1' OR SelfAss.LastPeriod = '1');
    
    qid & accept id: (21532604, 21537534) query: SUM of DATEDIFF in minutes for each 2 rows soup:

    soup wrap:

    Using @DaveZych's sample data I have managed to calculate the same results as he did, using the SQL statement below:

    ;WITH DataSource ([StartOrEnd], [badge_no], [punch_timestamp]) AS
    (
        SELECT ROW_NUMBER() OVER (PARTITION BY [badge_no] ORDER BY [punch_timestamp]) +
               ROW_NUMBER() OVER (PARTITION BY [badge_no] ORDER BY [punch_timestamp])  % 2
              ,[badge_no]
              ,[punch_timestamp]
        FROM #Time
    ),
    TimesPerBadge_No ([badge_no], [StartOrEnd], [Minutes]) AS
    (
        SELECT  [badge_no]
               ,[StartOrEnd] 
               ,DATEDIFF(MINUTE, MIN([punch_timestamp]), MAX([punch_timestamp]))
        FROM DataSource
        GROUP BY [badge_no]
                ,[StartOrEnd] 
    )
    SELECT [badge_no]
          ,SUM([Minutes])
    FROM TimesPerBadge_No
    GROUP BY [badge_no]
    

    Here you can see the values of each CTE:

    First, we need to group each start and end date:

     SELECT ROW_NUMBER() OVER (PARTITION BY [badge_no] ORDER BY [punch_timestamp]) +
               ROW_NUMBER() OVER (PARTITION BY [badge_no] ORDER BY [punch_timestamp])  % 2
              ,[badge_no]
              ,[punch_timestamp]
        FROM #Time
    


    Now, we can calculate the minutes difference in each group:

    SELECT  [badge_no]
            ,[StartOrEnd] 
            ,DATEDIFF(MINUTE, MIN([punch_timestamp]), MAX([punch_timestamp]))
    FROM DataSource
    GROUP BY [badge_no]
            ,[StartOrEnd] 
    


    and finally summarize the minutes for each badge_no:

    SELECT [badge_no]
          ,SUM([Minutes])
    FROM TimesPerBadge_No
    GROUP BY [badge_no]
    

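    The whole pipeline above can be sketched end-to-end with Python's sqlite3 (hypothetical sample punches; DATEDIFF is replaced by a julianday calculation, since SQLite has no DATEDIFF):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE punches (badge_no INTEGER, punch_timestamp TEXT);
INSERT INTO punches VALUES
 (1, '2024-01-01 08:00:00'),  -- clock in
 (1, '2024-01-01 08:30:00'),  -- clock out (30 min)
 (1, '2024-01-01 13:00:00'),  -- clock in
 (1, '2024-01-01 13:45:00');  -- clock out (45 min)
""")

# rn + rn % 2 maps rows 1,2 -> 2 and rows 3,4 -> 4, pairing each
# start punch with the end punch that follows it.
rows = conn.execute("""
WITH DataSource AS (
    SELECT ROW_NUMBER() OVER (PARTITION BY badge_no ORDER BY punch_timestamp)
         + ROW_NUMBER() OVER (PARTITION BY badge_no ORDER BY punch_timestamp) % 2
           AS pair_id,
           badge_no, punch_timestamp
    FROM punches
),
TimesPerBadge AS (
    SELECT badge_no,
           CAST(ROUND((julianday(MAX(punch_timestamp))
                     - julianday(MIN(punch_timestamp))) * 24 * 60) AS INTEGER)
           AS minutes
    FROM DataSource
    GROUP BY badge_no, pair_id
)
SELECT badge_no, SUM(minutes) FROM TimesPerBadge GROUP BY badge_no
""").fetchall()
print(rows)
```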

    qid & accept id: (21535167, 21535283) query: SQL Copy only data from table1 where it doesnt exist in table 2? soup:

soup wrap:

    Assuming SQL Server given the SELECT INTO in your question:

    Using your sample query to populate a new table with only records from Table1 where the item value wasn't in Table2:

    SELECT a.Item 
    INTO new_table2
    FROM table1 a
    LEFT JOIN Table2 b
      ON a.item = b.item
    WHERE b.item IS NULL
    

    If you didn't want a new table and just want to add to Table2 the records from Table1 that aren't already there:

    INSERT INTO Table2 (Item) 
    SELECT a.Item
    FROM table1 a
    LEFT JOIN Table2 b
      ON a.item = b.item
    WHERE b.item IS NULL
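    As a quick sanity check, the anti-join INSERT can be exercised with Python's sqlite3 (item values here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (item TEXT);
CREATE TABLE table2 (item TEXT);
INSERT INTO table1 VALUES ('a'), ('b'), ('c');
INSERT INTO table2 VALUES ('b');
""")

# Copy only the rows of table1 whose item is missing from table2.
conn.execute("""
INSERT INTO table2 (item)
SELECT a.item
FROM table1 a
LEFT JOIN table2 b ON a.item = b.item
WHERE b.item IS NULL
""")

items = sorted(r[0] for r in conn.execute("SELECT item FROM table2"))
print(items)  # 'b' was already there; only 'a' and 'c' were copied
```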
    
    qid & accept id: (21540219, 21554872) query: Finding Outliers In SQL soup:

soup wrap:

    Sometimes simple is best; no need for an intro to statistics yet. I would recommend starting with simple grouping. Within a group you can take the average and get the minimum, the maximum and other useful bits of data. Here are a couple of examples to get you started:

        SELECT Table1.State, Table1.Yr, Count(Table1.Price) AS CountOfPrice, Min(Table1.Price) AS MinOfPrice, Max(Table1.Price) AS MaxOfPrice, Avg(Table1.Price) AS AvgOfPrice
    FROM Table1
    GROUP BY Table1.State, Table1.Yr;
    

    Or (in case you want month data included)

        SELECT Table1.State, Table1.Yr, Month([Dt]) AS Mnth, Count(Table1.Price) AS CountOfPrice, Min(Table1.Price) AS MinOfPrice, Max(Table1.Price) AS MaxOfPrice
    FROM Table1
    GROUP BY Table1.State, Table1.Yr, Month([Dt]);
    

    Obviously you'll need to modify the table and field names. (Just so you know: 'Year' and 'Date' are both reserved words and best not used for field names.)
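    A minimal, runnable version of the first grouping query, using Python's sqlite3 and made-up prices; the 900.0 row stands out immediately against its group's minimum and average:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE prices (State TEXT, Yr INTEGER, Price REAL);
INSERT INTO prices VALUES
 ('TX', 2013, 100.0), ('TX', 2013, 110.0), ('TX', 2013, 900.0),
 ('CA', 2013, 200.0);
""")

# Per-group count, min, max and average -- enough to eyeball outliers.
rows = conn.execute("""
SELECT State, Yr, COUNT(Price), MIN(Price), MAX(Price), AVG(Price)
FROM prices
GROUP BY State, Yr
ORDER BY State
""").fetchall()
print(rows)
```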

    qid & accept id: (21546809, 21549472) query: Split text value insert another cell soup:

soup wrap:

    Create this function:

    create function f_parca
    (
     @name varchar(100)
    ) returns varchar(max)
    as
    begin
    declare @rv varchar(max) = ''
    
    if @name is not null
    select top (len(@name)) @rv += ','+ left(@name, number + 1) 
    from master..spt_values v
    where type = 'p'
    
    return stuff(@rv, 1,1,'')
    end
    

    Testing the function

    select dbo.f_parca('TClausen')
    

    Result:

    T,TC,TCl,TCla,TClau,TClaus,TClause,TClausen
    

    Update your table like this:

    UPDATE export1
    SET PARCA = dbo.f_parca(name)
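    The function's effect is easy to mirror outside SQL Server; here is a minimal Python sketch of the same prefix expansion (the name f_parca is taken from the answer above):

```python
def f_parca(name):
    """Return every prefix of name, comma-separated, like the T-SQL function."""
    if name is None:
        return None
    return ','.join(name[:i] for i in range(1, len(name) + 1))

print(f_parca('TClausen'))  # T,TC,TCl,TCla,TClau,TClaus,TClause,TClausen
```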
    
    qid & accept id: (21613270, 21616500) query: Returning only the most recent values of a query soup:

soup wrap:

    Of course. You just need a sub-query to identify the most recent record for each agent. Something like (untested):

    select a.eventdatetime
            ,b.resourcename
            ,b.extension
            ,a.eventtype 
        from agentstatedetail as a
            ,resource as b
            ,team as c
            ,(SELECT agentid, MAX(eventdatetime) AS lastevent
               FROM agentstatedetail 
               WHERE DATE(eventdatetime) = TODAY
               GROUP BY agentid) AS d 
    where (a.agentid = b.resourceid) 
        and (b.assignedteamid = 10) 
        and (c.teamname like 'teamnamehere %') 
        and (d.agentid = a.agentid and a.eventdatetime = d.lastevent)
    group by a.eventdatetime
        ,b.resourcename
        ,b.extension
        ,a.eventtype 
    order by eventdatetime desc
    

    You may need to look at indexing agentstatedetail to get maximum efficiency.

    EDIT

    Per your comment about avoiding the nested query and handling the skipping of agentid values already seen, that's a fairly trivial client-side solution. I don't know exactly how you're handling this on the PHP side, but you'd basically want to do something like this:

    $data = $db->query("select a.eventdatetime, b.resourcename, b.extension, a.eventtype
                        from agentstatedetail as a, resource as b, team as c 
                        where date(eventdatetime) = date(current)
                        and (a.agentid = b.resourceid) and (b.assignedteamid = 10)
                        and (c.teamname like 'ITS Help Desk %')
                        group by a.eventdatetime, b.resourcename,
                                 b.extension, a.eventtype
                        order by eventdatetime desc");
    
    $agent = Array();
    
    foreach($data as $row){
        if(!$agent[$row['RESOURCENAME']]++) {
            echo "
    ";
        }
    }

    The associative array $agent tracks how many records have been seen for a particular agent. When that's empty, it's the first occurrence. The exact non-zero number is not really useful, we just use a post-increment for efficiency, rather than setting $agent[$row['RESOURCENAME']] explicitly in the loop.
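    The same first-occurrence filter, sketched in Python for clarity (hypothetical rows, keyed by RESOURCENAME as in the PHP version; the rows are assumed to arrive sorted newest-first, as the query's ORDER BY guarantees):

```python
rows = [
    {"RESOURCENAME": "alice", "eventtype": "Ready"},
    {"RESOURCENAME": "bob",   "eventtype": "Talking"},
    {"RESOURCENAME": "alice", "eventtype": "Not Ready"},  # older event, skipped
]

seen = {}
latest = []
for row in rows:
    count = seen.get(row["RESOURCENAME"], 0)
    if count == 0:          # first time we see this agent -> most recent event
        latest.append(row)
    seen[row["RESOURCENAME"]] = count + 1

print([r["RESOURCENAME"] for r in latest])
```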

    qid & accept id: (21622435, 21623431) query: SQL CASE WHEN, when i want an "including" row soup:

soup wrap:

    Query:

    SELECT CASE WHEN mark = 'Ford' THEN 'Ford' END AS Mark,
    COUNT(*)
    FROM Table1 t
    WHERE mark = 'Ford'
    GROUP BY mark
    UNION ALL
    SELECT CASE WHEN mark = 'Ford' AND Transmition = 'A' 
                  THEN 'including Fords with automatic transmitions' END AS Mark,
    COUNT(*)
    FROM Table1 t
    WHERE mark = 'Ford'
    AND Transmition = 'A' 
    GROUP BY CASE WHEN mark = 'Ford' AND Transmition = 'A' 
                  THEN 'including Fords with automatic transmitions' END
    

    Result:

    |                                        MARK | COUNT(*) |
    |---------------------------------------------|----------|
    |                                        Ford |        4 |
    | including Fords with automatic transmitions |        3 |
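    The overall-plus-subset counting can be checked with Python's sqlite3. This sketch drops the CASE/GROUP BY (which become redundant once the WHERE clause pins the mark) and keeps just the two filtered counts; the car rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cars (mark TEXT, transmission TEXT);
INSERT INTO cars VALUES
 ('Ford', 'A'), ('Ford', 'A'), ('Ford', 'A'), ('Ford', 'M'),
 ('Opel', 'A');
""")

# One row for all Fords, one "including" row for the automatic subset.
rows = conn.execute("""
SELECT 'Ford' AS mark, COUNT(*) FROM cars WHERE mark = 'Ford'
UNION ALL
SELECT 'including Fords with automatic transmissions', COUNT(*)
FROM cars WHERE mark = 'Ford' AND transmission = 'A'
""").fetchall()
print(rows)
```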
    
    qid & accept id: (21626432, 21626549) query: Comparing String,if it is NULL in Sql Server 2008 soup:

soup wrap:

    !=/<> '' is not the same as IS NOT NULL! You need this:

    IF(Name <> '')
        // Do some stuff
    ELSE IF(Phone  <> '')
        // Do some stuff
    ELSE
        // Do some other stuff
    

    If Name or Phone can be NULL, you need this:

    IF(ISNULL(Name, '') <> '')
        // Do some stuff
    ELSE IF(ISNULL(Phone, '')  <> '')
        // Do some stuff
    ELSE
        // Do some other stuff
    

    In SQL, NULL is always <> ''. In fact, in most configurations, NULL is also <> NULL.
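    The NULL behaviour is easy to observe from Python's sqlite3 (SQLite spells ISNULL as IFNULL; the comparison semantics are the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# NULL <> '' is not true -- it is NULL (shown as None), so an IF would not fire.
null_cmp = conn.execute("SELECT NULL <> ''").fetchone()[0]
print(null_cmp)  # None

# After coalescing NULL to '', the comparison is a plain false (0).
coalesced = conn.execute("SELECT IFNULL(NULL, '') <> ''").fetchone()[0]
print(coalesced)  # 0

# A real value compares true (1).
value_cmp = conn.execute("SELECT IFNULL('Bob', '') <> ''").fetchone()[0]
print(value_cmp)  # 1
```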

    qid & accept id: (21640927, 21640997) query: remove duplicate records in oracle soup:

soup wrap:

    This works for SQL Server:

    delete a from newproducts as a
     where 
    exists(
    select * from newproducts b
    where a.id = b.id and a.date < b.date)
    

    The same, or the following, should work on Oracle:

    delete from newproducts a
     where 
    exists(
    select * from newproducts b
    where a.id = b.id and a.date < b.date)
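    The Oracle-style form can be tried in Python's sqlite3, which likewise deletes from the bare table name and correlates through EXISTS (sample ids and dates invented); id 1 keeps only its newest row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE newproducts (id INTEGER, date TEXT);
INSERT INTO newproducts VALUES
 (1, '2024-01-01'), (1, '2024-01-05'), (2, '2024-02-01');
""")

# Delete every row for which a newer row with the same id exists.
conn.execute("""
DELETE FROM newproducts
WHERE EXISTS (
    SELECT 1 FROM newproducts b
    WHERE newproducts.id = b.id AND newproducts.date < b.date
)
""")

rows = conn.execute("SELECT id, date FROM newproducts ORDER BY id").fetchall()
print(rows)
```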
    
    qid & accept id: (21646708, 21646749) query: check which names have the same field in a database soup:

soup wrap:

    How about this:

    select group_concat(name) as names, time
    from table t
    group by time
    having count(*) > 1;
    

    This will give you output such as:

    Names               Time
    Richard,Luigi       8:00
    . . .
    

    Which you can then format on the application side.
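    SQLite also has group_concat, so the query runs as-is from Python (sample names and times made up; note that group_concat's element order is not guaranteed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE schedule (name TEXT, time TEXT);
INSERT INTO schedule VALUES
 ('Richard', '8:00'), ('Luigi', '8:00'), ('Mario', '9:00');
""")

# One row per time slot shared by more than one name.
rows = conn.execute("""
SELECT group_concat(name) AS names, time
FROM schedule
GROUP BY time
HAVING COUNT(*) > 1
""").fetchall()
print(rows)
```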

    qid & accept id: (21669936, 21670216) query: Join and get only single row with respect to each id soup:

soup wrap:

    You can select only one imageId (the minimum) for each productId by joining to the filtered imageId like this:

    SELECT p.ProductId, ProductName, i.imageId, imagePath
    FROM product p
        INNER JOIN Image i 
            ON i.ProductId = p.ProductId
        INNER JOIN
            (SELECT MIN(imageId) As imageId, ProductId
             FROM image
             GROUP BY ProductId
             ) o 
             ON o.imageId = i.imageId
    

    or by filtering imageId using a WHERE clause:

    SELECT p.ProductId, ProductName, imageId, imagePath
    FROM product p
        INNER JOIN Image i 
            ON i.ProductId = p.ProductId
    WHERE imageId IN
        (SELECT MIN(imageId) As imageId
         FROM image
         GROUP BY ProductId
         )
    

    SQLFiddle Demo
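    Both variants can be checked with Python's sqlite3; here is the WHERE ... IN version against hypothetical product/image rows (each product ends up with exactly its lowest-id image):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product (ProductId INTEGER, ProductName TEXT);
CREATE TABLE image (imageId INTEGER, ProductId INTEGER, imagePath TEXT);
INSERT INTO product VALUES (1, 'Mug'), (2, 'Hat');
INSERT INTO image VALUES
 (10, 1, 'mug-front.jpg'), (11, 1, 'mug-back.jpg'), (20, 2, 'hat.jpg');
""")

rows = conn.execute("""
SELECT p.ProductId, p.ProductName, i.imageId, i.imagePath
FROM product p
INNER JOIN image i ON i.ProductId = p.ProductId
WHERE i.imageId IN (SELECT MIN(imageId) FROM image GROUP BY ProductId)
ORDER BY p.ProductId
""").fetchall()
print(rows)
```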

    qid & accept id: (21692871, 21722269) query: Combine multiple rows into multiple columns dynamically in SQL Server soup:

soup wrap:

    I would do it using dynamic SQL, but this is (http://sqlfiddle.com/#!6/a63a6/1/0) the PIVOT solution:

    SELECT badge, name, [AP_KDa], [AP_Match], [ADC_KDA],[ADC_Match],[TOP_KDA],[TOP_Match] FROM
    (
    SELECT badge, name, col, val FROM(
     SELECT *, Job+'_KDA' as Col, KDA as Val FROM @T 
     UNION
     SELECT *, Job+'_Match' as Col,Match as Val  FROM @T
    ) t
    ) tt
    PIVOT ( max(val) for Col in ([AP_KDa], [AP_Match], [ADC_KDA],[ADC_Match],[TOP_KDA],[TOP_Match]) ) AS pvt
    

    Bonus: this is how PIVOT can be combined with dynamic SQL (http://sqlfiddle.com/#!6/a63a6/7/0). Again, I would prefer to do it more simply, without PIVOT, but this was good exercise for me:

    SELECT badge, name, cast(Job+'_KDA' as nvarchar(128)) as Col, KDA as Val INTO #Temp1 FROM Temp 
    INSERT INTO #Temp1 SELECT badge, name, Job+'_Match' as Col, Match as Val FROM Temp
    
    DECLARE @columns nvarchar(max)
    SELECT @columns = COALESCE(@columns + ', ', '') + Col FROM #Temp1 GROUP BY Col
    
    DECLARE @sql nvarchar(max) = 'SELECT badge, name, '+@columns+' FROM #Temp1 PIVOT ( max(val) for Col in ('+@columns+') ) AS pvt'
    exec (@sql)
    
    DROP TABLE #Temp1
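    Since the answer says it would rather avoid PIVOT, here is that simpler conditional-aggregation shape sketched with Python's sqlite3 (SQLite has no PIVOT; the Job_KDA / Job_Match column naming follows the answer, the stats rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stats (badge INTEGER, name TEXT, job TEXT, kda REAL, matches INTEGER);
INSERT INTO stats VALUES
 (1, 'Ann', 'AP',  2.5, 10),
 (1, 'Ann', 'ADC', 3.0,  5);
""")

# One output column per (job, measure) pair -- the manual equivalent of PIVOT.
rows = conn.execute("""
SELECT badge, name,
       MAX(CASE WHEN job = 'AP'  THEN kda     END) AS AP_KDA,
       MAX(CASE WHEN job = 'AP'  THEN matches END) AS AP_Match,
       MAX(CASE WHEN job = 'ADC' THEN kda     END) AS ADC_KDA,
       MAX(CASE WHEN job = 'ADC' THEN matches END) AS ADC_Match
FROM stats
GROUP BY badge, name
""").fetchall()
print(rows)
```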
    
    qid & accept id: (21731573, 21732258) query: Infering missing ranges in a continuous scale soup:

soup wrap:

    You don't need the view.

    This should do what you want (change the 2 literal to a variable; I tested it with a 2).

    The first query grabs the discount if there's a discount. The second (connected by union) would grab a penalty if there's a penalty, but of an amount above the first row's from_amount, and the third (connected by union) would grab the penalty if there is one and it's below the first row's from_amount.

    You can test it here: http://sqlfiddle.com/#!4/d41d8/25188/0

    with discounts as
    ( select 25 as from_amount, 39 as to_amount, .02 as discount from dual union all
      select 40 as from_amount, 49 as to_amount, .05 as discount from dual union all
      select 50 as from_amount, 99999 as to_amount, .10 as discount from dual  ) 
       , penalties as
    ( select 5 as from_amount, 9 as to_amount, .10 as penalty from dual union all
      select 10 as from_amount, 19 as to_amount, .05 as penalty from dual)
    select discount as change
    from discounts
    where 2 between from_amount and to_amount
    union all
    select -penalty as change
    from penalties
    where 2 between from_amount and to_amount
    union all
    select -penalty as change
    from penalties
    where 2 < (select min(from_amount) from penalties)
    and from_amount = (select min(from_amount) from penalties)
    

    Regarding your last edit, the query below would show "0" for any amount for which there is neither a penalty nor a discount (the version of my query above would just show no rows for such a situation). You may prefer that it show zero, like this:

    select discount as change
    from discounts
    where 22 between from_amount and to_amount
    union all
    select -penalty as change
    from penalties
    where 22 between from_amount and to_amount
    union all
    select -penalty as change
    from penalties
    where 22 < (select min(from_amount) from penalties)
    and from_amount = (select min(from_amount) from penalties)
    union all
    select 0 as change
    from dual
    where not exists (select 1 from discounts where 22 between from_amount and to_amount)
      and not exists (select 1 from penalties where 22 between from_amount and to_amount)
      and 22 >= (select min(from_amount) from penalties)
    

    If you change the SQL for that view to the below, you should get the range in between to show zero:

    select discounts.from_amount as from_amount,
           discounts.to_amount as to_amount,
           discounts.discount * -1 as change
      from discounts
    union
    select penalties.from_amount as from_amount,
           penalties.to_amount   as to_amount,
           penalties.penalty     as change
      from penalties
    union
    select p.to_amount + 1, d.from_amount - 1, 0 as change
      from discounts d, penalties p
     where d.from_amount = (select min(from_amount) from discounts) and
     p.to_amount = (select max(to_amount) from penalties)
     order by from_amount desc
    
    qid & accept id: (21740326, 21740481) query: ROLLUP Function; Replace NULL with 'Total' w/ Column data type INT not VARCHAR soup:

soup wrap:

    Test Data

    DECLARE @MyTable TABLE (Column1 INT,Column2 INT)
    INSERT INTO @MyTable VALUES
    (1,1),(1,2),(1,3),(2,1),(2,2),(2,3),(3,1),(3,2),(3,3)
    
    SELECT CASE
             WHEN GROUPING(Column1) = 1 THEN 'Total'
             ELSE CAST(Column1 AS VARCHAR(10))     --<-- Cast as Varchar
           END  Column1
          , SUM(Column2) AS MySum
    FROM @MyTable
    GROUP BY Column1 
    WITH ROLLUP;
    

    Result Set

    ╔═════════╦═══════╗
    ║ Column1 ║ MySum ║
    ╠═════════╬═══════╣
    ║ 1       ║     6 ║
    ║ 2       ║     6 ║
    ║ 3       ║     6 ║
    ║ Total   ║    18 ║
    ╚═════════╩═══════╝
    

    Note

    The reason you couldn't do what you were trying to do is that when you use a CASE expression, each branch must return the same datatype.

    In the above query I have just CAST Column1 to varchar and it worked.
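    SQLite lacks WITH ROLLUP, but the same shape, and the same need to cast the INT column before mixing it with the 'Total' label, can be sketched as a UNION ALL from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (Column1 INTEGER, Column2 INTEGER);
INSERT INTO t VALUES
 (1,1),(1,2),(1,3),(2,1),(2,2),(2,3),(3,1),(3,2),(3,3);
""")

# The cast makes every branch of the result TEXT, so 'Total' fits in.
rows = conn.execute("""
SELECT CAST(Column1 AS TEXT) AS Column1, SUM(Column2) AS MySum
FROM t GROUP BY Column1
UNION ALL
SELECT 'Total', SUM(Column2) FROM t
ORDER BY Column1
""").fetchall()
print(rows)
```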

    qid & accept id: (21746336, 21746384) query: How to repeat the same SQL query for different column values soup:

soup wrap:

    Try this query

    SELECT ENAME, EID, Salary FROM  WHERE ENAME IN ('AAA','DDD','ZZZ');
    

    or

    SELECT ENAME, EID, Salary FROM  WHERE ENAME IN (SELECT ENAME FROM  WHERE );
    
    qid & accept id: (21765911, 21766175) query: In MySQL how to write SQL to search for words in a field? soup:

soup wrap:

    This will work for your particular example:

    select comment 
    from tbl
    where soundex(comment) like '%D510%' or comment like '%dumb%';
    

    It won't find misspellings in the comment.

    EDIT:

    You could do something like this:

    select comment
    from tbl
    where soundex(comment) = soundex('dumb') or
          soundex(substring_index(substring_index(comment, ' ', 2), ' ', -1)) = soundex('dumb') or
          soundex(substring_index(substring_index(comment, ' ', 3), ' ', -1)) = soundex('dumb') or
          soundex(substring_index(substring_index(comment, ' ', 4), ' ', -1)) = soundex('dumb') or
          soundex(substring_index(substring_index(comment, ' ', 5), ' ', -1)) = soundex('dumb');
    

    A bit brute force.

    The need to do this suggests that you should consider a full text index.
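    For reference, Soundex itself is simple enough to sketch in Python. This hand-rolled version is an assumption, not MySQL's exact implementation (which differs slightly around h/w and code length), but it shows why 'dumb' maps to D510:

```python
# Letter-to-digit table for the classic Soundex groups.
CODES = {c: d for d, letters in {
    "1": "bfpv", "2": "cgjkqsxz", "3": "dt",
    "4": "l", "5": "mn", "6": "r"}.items() for c in letters}

def soundex(word):
    """Simplified American Soundex: first letter + three digits."""
    word = word.lower()
    first = word[0].upper()
    digits = [CODES.get(c, "") for c in word]  # vowels etc. map to ""
    out, prev = [], digits[0]
    for d in digits[1:]:
        if d and d != prev:   # skip blanks and adjacent duplicate codes
            out.append(d)
        prev = d
    return (first + "".join(out) + "000")[:4]

print(soundex("dumb"))  # D510
```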

    qid & accept id: (21786302, 21788209) query: SQL Sum MTD & YTD soup:
soup wrap:
    SELECT
      Period = 'MTD',
      Total_value = SUM(T0.TotalSumSy) 
    FROM dbo.INV1  T0 
      INNER JOIN dbo.OINV  T1 
         ON T1.DocEntry = T0.DocEntry
    WHERE 
        T1.DocDate >= DATEADD(month,DATEDIFF(month,'20010101',GETDATE()),'20010101')
      AND 
        T1.DocDate < DATEADD(month,1+DATEDIFF(month,'20010101',GETDATE()),'20010101')
    
    UNION ALL
    
    SELECT
      'YTD', 
      SUM(T0.TotalSumSy) 
    FROM dbo.INV1  T0 
      INNER JOIN dbo.OINV  T1 
         ON T1.DocEntry = T0.DocEntry
    WHERE 
        T1.DocDate >= DATEADD(year,DATEDIFF(year,'20010101',GETDATE()),'20010101')
      AND 
        T1.DocDate < DATEADD(year,1+DATEDIFF(year,'20010101',GETDATE()),'20010101') ;
    

    The (complicated) conditions in the WHERE clauses are used instead of the YEAR(column) = YEAR(GETDATE()) condition (and the other one) you had previously, so indexes can be used. When you apply a function to a column, you make indexes unusable (with some minor exceptions for some functions and some versions of SQL Server). So, the best thing is to try to convert the conditions to this type:

    column <comparison operator> AnyComplexFunction()
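    The same "round down to the period start, then compare with >= and <" idea, sketched as a hypothetical Python helper (not part of the SQL above):

```python
from datetime import date

def month_bounds(today):
    """Half-open [start, next_start) range covering today's month."""
    start = today.replace(day=1)
    next_start = (start.replace(year=start.year + 1, month=1)
                  if start.month == 12
                  else start.replace(month=start.month + 1))
    return start, next_start

start, end = month_bounds(date(2024, 2, 14))
print(start, end)  # 2024-02-01 2024-03-01
# A DocDate d is "month to date" iff start <= d < end -- the column itself
# stays untouched, so an index on it remains usable.
```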
    
    qid & accept id: (21832842, 21832920) query: Get Last message loaded based on message type soup:

    You can use the ROW_NUMBER() Function to assign each of your messages a rank by Message date (starting at 1 again for each message type), then just limit the results to the top ranked message:

    \n
    WITH AllMessages AS\n(   SELECT  MessageTypes.MessageType, \n            Messages.MessageDate, \n            Messages.ValueDate, \n            Messages.MessageReference, \n            Messages.Beneficiary, \n            Messages.StatusId,\n            MessageStatus.Status, \n            BICProfile.BIC,\n            RowNumber = ROW_NUMBER() OVER(PARTITION BY Messages.MessageTypeId \n                                            ORDER BY Messages.MessageDate DESC)\n    FROM    Messages \n            INNER JOIN MessageStatus \n                ON Messages.StatusId = MessageStatus.Id \n            INNER JOIN MessageTypes \n                ON Messages.MessageTypeId = MessageTypes.MessageTypeId \n            INNER JOIN BICProfile  \n                ON Messages.SenderId = dbo.BICProfile.BicId \n    WHERE   BICProfile.BIC = 'someValue'\n    AND     Messages.StatusId IN (4, 5, 6)\n)\nSELECT  MessageType, \n        MessageDate, \n        ValueDate, \n        MessageReference, \n        Beneficiary, \n        StatusId,\n        Status, \n        BIC \nFROM    AllMessages\nWHERE   RowNumber = 1;\n
    \n

    If you can't use ROW_NUMBER then you can use a subquery to get the latest message date per type:

    \n
    SELECT  Messages.MessageTypeID, MessageDate = MAX(Messages.MessageDate)\nFROM    Messages\n        INNER JOIN BICProfile  \n            ON Messages.SenderId = dbo.BICProfile.BicId \nWHERE   BICProfile.BIC = 'someValue'\nAND     Messages.StatusId IN (4, 5, 6)\nGROUP BY Messages.MessageTypeID\n
    \n

    Then inner join the results of this back to your main query to filter the results:

    \n
    SELECT  MessageTypes.MessageType, \n        Messages.MessageDate, \n        Messages.ValueDate, \n        Messages.MessageReference, \n        Messages.Beneficiary, \n        Messages.StatusId,\n        MessageStatus.Status, \n        BICProfile.BIC\nFROM    Messages \n        INNER JOIN MessageStatus \n            ON Messages.StatusId = MessageStatus.Id \n        INNER JOIN MessageTypes \n            ON Messages.MessageTypeId = MessageTypes.MessageTypeId \n        INNER JOIN BICProfile  \n            ON Messages.SenderId = dbo.BICProfile.BicId \n        INNER JOIN \n        (   SELECT  Messages.MessageTypeID, \n                    MessageDate = MAX(Messages.MessageDate)\n            FROM    Messages\n                    INNER JOIN BICProfile  \n                        ON Messages.SenderId = dbo.BICProfile.BicId \n            WHERE   BICProfile.BIC = 'someValue'\n            AND     Messages.StatusId IN (4, 5, 6)\n            GROUP BY Messages.MessageTypeID\n        ) AS MaxMessage\n            ON MaxMessage.MessageTypeID = Messages.MessageTypeID\n            AND MaxMessage.MessageDate = Messages.MessageDate\nWHERE   BICProfile.BIC = 'someValue'\nAND     Messages.StatusId IN (4, 5, 6);\n
    \n

    N.B This second method will return multiple rows per message type if the latest message date is common among more than one message. This behaviour can be replicated in the first query by replacing ROW_NUMBER with RANK

    \n
    \n

    EDIT

    \n

    If you will have multiple messages with the same date and only want to return one of them you need to expand the ordering within the row_number function, i.e. if you wanted to pick the message with the maximum id when there were ties you could make it:

    \n
    RowNumber = ROW_NUMBER() OVER(PARTITION BY Messages.MessageTypeId \n                                ORDER BY Messages.MessageDate DESC,\n                                        Messages.MessageID DESC)\n
    \n

    So the full query would be:

    \n
    WITH AllMessages AS\n(   SELECT  MessageTypes.MessageType, \n            Messages.MessageDate, \n            Messages.ValueDate, \n            Messages.MessageReference, \n            Messages.Beneficiary, \n            Messages.StatusId,\n            MessageStatus.Status, \n            BICProfile.BIC,\n            RowNumber = ROW_NUMBER() OVER(PARTITION BY Messages.MessageTypeId \n                                            ORDER BY Messages.MessageDate DESC,\n                                                    Messages.MessageID DESC)\n    FROM    Messages \n            INNER JOIN MessageStatus \n                ON Messages.StatusId = MessageStatus.Id \n            INNER JOIN MessageTypes \n                ON Messages.MessageTypeId = MessageTypes.MessageTypeId \n            INNER JOIN BICProfile  \n                ON Messages.SenderId = dbo.BICProfile.BicId \n    WHERE   BICProfile.BIC = 'someValue'\n    AND     Messages.StatusId IN (4, 5, 6)\n)\nSELECT  MessageType, \n        MessageDate, \n        ValueDate, \n        MessageReference, \n        Beneficiary, \n        StatusId,\n        Status, \n        BIC \nFROM    AllMessages\nWHERE   RowNumber = 1;\n
    \n soup wrap:

    You can use the ROW_NUMBER() Function to assign each of your messages a rank by Message date (starting at 1 again for each message type), then just limit the results to the top ranked message:

    WITH AllMessages AS
    (   SELECT  MessageTypes.MessageType, 
                Messages.MessageDate, 
                Messages.ValueDate, 
                Messages.MessageReference, 
                Messages.Beneficiary, 
                Messages.StatusId,
                MessageStatus.Status, 
                BICProfile.BIC,
                RowNumber = ROW_NUMBER() OVER(PARTITION BY Messages.MessageTypeId 
                                                ORDER BY Messages.MessageDate DESC)
        FROM    Messages 
                INNER JOIN MessageStatus 
                    ON Messages.StatusId = MessageStatus.Id 
                INNER JOIN MessageTypes 
                    ON Messages.MessageTypeId = MessageTypes.MessageTypeId 
                INNER JOIN BICProfile  
                    ON Messages.SenderId = dbo.BICProfile.BicId 
        WHERE   BICProfile.BIC = 'someValue'
        AND     Messages.StatusId IN (4, 5, 6)
    )
    SELECT  MessageType, 
            MessageDate, 
            ValueDate, 
            MessageReference, 
            Beneficiary, 
            StatusId,
            Status, 
            BIC 
    FROM    AllMessages
    WHERE   RowNumber = 1;
    

    If you can't use ROW_NUMBER then you can use a subquery to get the latest message date per type:

    SELECT  Messages.MessageTypeID, MessageDate = MAX(Messages.MessageDate)
    FROM    Messages
            INNER JOIN BICProfile  
                ON Messages.SenderId = dbo.BICProfile.BicId 
    WHERE   BICProfile.BIC = 'someValue'
    AND     Messages.StatusId IN (4, 5, 6)
    GROUP BY Messages.MessageTypeID
    

    Then inner join the results of this back to your main query to filter the results:

    SELECT  MessageTypes.MessageType, 
            Messages.MessageDate, 
            Messages.ValueDate, 
            Messages.MessageReference, 
            Messages.Beneficiary, 
            Messages.StatusId,
            MessageStatus.Status, 
            BICProfile.BIC
    FROM    Messages 
            INNER JOIN MessageStatus 
                ON Messages.StatusId = MessageStatus.Id 
            INNER JOIN MessageTypes 
                ON Messages.MessageTypeId = MessageTypes.MessageTypeId 
            INNER JOIN BICProfile  
                ON Messages.SenderId = dbo.BICProfile.BicId 
            INNER JOIN 
            (   SELECT  Messages.MessageTypeID, 
                        MessageDate = MAX(Messages.MessageDate)
                FROM    Messages
                        INNER JOIN BICProfile  
                            ON Messages.SenderId = dbo.BICProfile.BicId 
                WHERE   BICProfile.BIC = 'someValue'
                AND     Messages.StatusId IN (4, 5, 6)
                GROUP BY Messages.MessageTypeID
            ) AS MaxMessage
                ON MaxMessage.MessageTypeID = Messages.MessageTypeID
                AND MaxMessage.MessageDate = Messages.MessageDate
    WHERE   BICProfile.BIC = 'someValue'
    AND     Messages.StatusId IN (4, 5, 6);
    

    N.B. This second method will return multiple rows per message type if the latest message date is shared by more than one message. This behaviour can be replicated in the first query by replacing ROW_NUMBER with RANK.
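    As a runnable sketch of that ROW_NUMBER/RANK difference, using Python's sqlite3 with made-up messages rather than the poster's schema (SQLite 3.25+ supports the same window functions):

```python
import sqlite3

# Hypothetical data: message type 2 has two rows tied on the latest date.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Messages (MessageTypeId INTEGER, MessageDate TEXT);
INSERT INTO Messages VALUES
 (1, '2014-01-01'), (1, '2014-01-02'),
 (2, '2014-01-03'), (2, '2014-01-03');
""")

# ROW_NUMBER always yields exactly one row per partition, even on ties.
row_number_rows = con.execute("""
SELECT MessageTypeId, MessageDate FROM (
    SELECT MessageTypeId, MessageDate,
           ROW_NUMBER() OVER (PARTITION BY MessageTypeId
                              ORDER BY MessageDate DESC) AS rn
    FROM Messages)
WHERE rn = 1
""").fetchall()

# RANK returns every row tied on the latest date.
rank_rows = con.execute("""
SELECT MessageTypeId, MessageDate FROM (
    SELECT MessageTypeId, MessageDate,
           RANK() OVER (PARTITION BY MessageTypeId
                        ORDER BY MessageDate DESC) AS rk
    FROM Messages)
WHERE rk = 1
""").fetchall()

print(len(row_number_rows))  # one row per type -> 2
print(len(rank_rows))        # both tied rows for type 2 -> 3
```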


    EDIT

    If you will have multiple messages with the same date and only want to return one of them you need to expand the ordering within the row_number function, i.e. if you wanted to pick the message with the maximum id when there were ties you could make it:

    RowNumber = ROW_NUMBER() OVER(PARTITION BY Messages.MessageTypeId 
                                    ORDER BY Messages.MessageDate DESC,
                                            Messages.MessageID DESC)
    

    So the full query would be:

    WITH AllMessages AS
    (   SELECT  MessageTypes.MessageType, 
                Messages.MessageDate, 
                Messages.ValueDate, 
                Messages.MessageReference, 
                Messages.Beneficiary, 
                Messages.StatusId,
                MessageStatus.Status, 
                BICProfile.BIC,
                RowNumber = ROW_NUMBER() OVER(PARTITION BY Messages.MessageTypeId 
                                                ORDER BY Messages.MessageDate DESC,
                                                        Messages.MessageID DESC)
        FROM    Messages 
                INNER JOIN MessageStatus 
                    ON Messages.StatusId = MessageStatus.Id 
                INNER JOIN MessageTypes 
                    ON Messages.MessageTypeId = MessageTypes.MessageTypeId 
                INNER JOIN BICProfile  
                    ON Messages.SenderId = dbo.BICProfile.BicId 
        WHERE   BICProfile.BIC = 'someValue'
        AND     Messages.StatusId IN (4, 5, 6)
    )
    SELECT  MessageType, 
            MessageDate, 
            ValueDate, 
            MessageReference, 
            Beneficiary, 
            StatusId,
            Status, 
            BIC 
    FROM    AllMessages
    WHERE   RowNumber = 1;
    
    qid & accept id: (21835289, 21835578) query: Store multiple data tables in single database table soup:

    Consider this\nCreate three table product, feature, product_feature and maybe product_photos

    \n

    Product database will be

    \n
    pid, p_name, p_description, p_price, ...\ninsert query \nINSERT INTO (p_name, p_description, p_price, ....) VALUES(?,?,?,...)\n
    \n

    feature table will

    \n
    fid, f_name, f_description, ...\ninsert query \nINSERT INTO (F_name, F_description, ....) VALUES(?,?,?,...)\n
    \n

    now the product_feature table will be

    \n
    id, pid, fid \nquery for one product\n// say a product Id is 1\nINSERT INTO (pid, fid) VALUES(1, 10) \nINSERT INTO (pid, fid) VALUES(1, 15\nINSERT INTO (pid, fid) VALUES(1, 30) \n
    \n

    where pid and fid are foreign keys with relations, phpmyadmin can do that for you\nyou can then add a product with multiple features

    \n

    then maybe the photo table

    \n
    foto_id, photo_name, photo_path ....\n
    \n

    use InnoDB for all the tables

    \n

    Let me know if you need further help

    \n soup wrap:

    Consider this: create three tables, product, feature and product_feature, and maybe product_photos.

    The product table will be:

    pid, p_name, p_description, p_price, ...
    insert query:
    INSERT INTO product (p_name, p_description, p_price, ...) VALUES (?, ?, ?, ...)
    

    The feature table will be:

    fid, f_name, f_description, ...
    insert query:
    INSERT INTO feature (f_name, f_description, ...) VALUES (?, ?, ...)
    

    Now the product_feature table will be:

    id, pid, fid
    insert queries for one product
    -- say the product id is 1
    INSERT INTO product_feature (pid, fid) VALUES (1, 10)
    INSERT INTO product_feature (pid, fid) VALUES (1, 15)
    INSERT INTO product_feature (pid, fid) VALUES (1, 30)
    

    where pid and fid are foreign keys with relations (phpMyAdmin can set that up for you). You can then add a product with multiple features.
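    For illustration, here is a minimal, hypothetical version of the three tables in Python's sqlite3, with the foreign keys declared directly in SQL instead of through phpMyAdmin:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
CREATE TABLE product (pid INTEGER PRIMARY KEY, p_name TEXT, p_price REAL);
CREATE TABLE feature (fid INTEGER PRIMARY KEY, f_name TEXT);
CREATE TABLE product_feature (
    id  INTEGER PRIMARY KEY,
    pid INTEGER REFERENCES product(pid),
    fid INTEGER REFERENCES feature(fid)
);
INSERT INTO product VALUES (1, 'Widget', 9.99);
INSERT INTO feature VALUES (10, 'Waterproof'), (15, 'Wireless'), (30, 'Solar');
-- one product linked to multiple features through the join table
INSERT INTO product_feature (pid, fid) VALUES (1, 10), (1, 15), (1, 30);
""")

# List a product together with all of its features via the join table.
features = [row[0] for row in con.execute("""
SELECT f.f_name
FROM   product p
JOIN   product_feature pf ON pf.pid = p.pid
JOIN   feature f          ON f.fid = pf.fid
WHERE  p.pid = 1
ORDER  BY f.fid
""")]
print(features)  # ['Waterproof', 'Wireless', 'Solar']
```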

    then maybe the photo table

    photo_id, photo_name, photo_path ....
    

    use InnoDB for all the tables

    Let me know if you need further help

    qid & accept id: (21841623, 21843075) query: Combing multiple rows into one row soup:

    I am not quite sure how the index in your query matches the index column in your data. But the query that you want is:

    \n
    SELECT index,\n       max(CASE WHEN index = 1 THEN Booknumber END) AS BookNumber1 ,\n       max(CASE WHEN index = 2 THEN Booknumber END) AS BookNumber2,\n       max(CASE WHEN index = 3 THEN Booknumber END) AS BookNumber3\nFROM Mytable\nGROUP BY index;\n
    \n

    Give your data, the query seems more like:

    \n
    SELECT index,\n       max(CASE WHEN ind = 1 THEN Booknumber END) AS BookNumber1 ,\n       max(CASE WHEN ind = 2 THEN Booknumber END) AS BookNumber2,\n       max(CASE WHEN ind = 3 THEN Booknumber END) AS BookNumber3\nFROM (select mt.*, row_number() over (partition by index order by BookNumber) as ind\n      from Mytable mt\n     ) mt\nGROUP BY index;\n
    \n

    By the way, "index" is a reserved word, so I assume that it is just a placeholder for another column name. Otherwise, you need to escape it with double quotes or square braces.

    \n soup wrap:

    I am not quite sure how the index in your query matches the index column in your data. But the query that you want is:

    SELECT index,
           max(CASE WHEN index = 1 THEN Booknumber END) AS BookNumber1 ,
           max(CASE WHEN index = 2 THEN Booknumber END) AS BookNumber2,
           max(CASE WHEN index = 3 THEN Booknumber END) AS BookNumber3
    FROM Mytable
    GROUP BY index;
    

    Given your data, the query seems more like:

    SELECT index,
           max(CASE WHEN ind = 1 THEN Booknumber END) AS BookNumber1 ,
           max(CASE WHEN ind = 2 THEN Booknumber END) AS BookNumber2,
           max(CASE WHEN ind = 3 THEN Booknumber END) AS BookNumber3
    FROM (select mt.*, row_number() over (partition by index order by BookNumber) as ind
          from Mytable mt
         ) mt
    GROUP BY index;
    

    By the way, "index" is a reserved word, so I assume that it is just a placeholder for another column name. Otherwise, you need to escape it with double quotes or square braces.
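    A runnable sketch of the second query on made-up data (Python's sqlite3, with the column renamed to idx to sidestep the reserved word):

```python
import sqlite3

# Hypothetical rows: idx 1 owns two books, idx 2 owns one.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Mytable (idx INTEGER, BookNumber TEXT);
INSERT INTO Mytable VALUES (1, 'A'), (1, 'B'), (2, 'C');
""")

# row_number() numbers the books within each idx group, and the
# MAX(CASE ...) columns spread them across a single pivoted row.
rows = con.execute("""
SELECT idx,
       MAX(CASE WHEN ind = 1 THEN BookNumber END) AS BookNumber1,
       MAX(CASE WHEN ind = 2 THEN BookNumber END) AS BookNumber2,
       MAX(CASE WHEN ind = 3 THEN BookNumber END) AS BookNumber3
FROM (SELECT mt.*,
             ROW_NUMBER() OVER (PARTITION BY idx ORDER BY BookNumber) AS ind
      FROM Mytable mt)
GROUP BY idx
ORDER BY idx
""").fetchall()
print(rows)  # [(1, 'A', 'B', None), (2, 'C', None, None)]
```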

    qid & accept id: (21869166, 21869585) query: MySQL: For each row in table, change one row in another table soup:

    You are selecting a field that is not part of the group by or being aggregated.

    \n
    SELECT data.id from \ndata INNER JOIN changes ON\n    data.c=changes.c_old AND data.g=changes.g \nGROUP BY changes.id\n
    \n

    You should use an aggregate function on the data.id in the select, or add data.id to the groupby (though I suspect that is not the result you want either)

    \n

    The INNER JOIN is result in this dataset

    \n
    +---------+--------+--------+------------+---------------+---------------+-----------+\n| data.id | data.c | data.g | changes.id | changes.c_old | changes.c_new | changes.g |\n+---------+--------+--------+------------+---------------+---------------+-----------+\n|       1 |      1 |      2 |          1 |             1 |             2 |         2 |\n|       1 |      1 |      2 |          3 |             1 |             2 |         2 |\n|       2 |      1 |      2 |          1 |             1 |             2 |         2 |\n|       2 |      1 |      2 |          3 |             1 |             2 |         2 |\n|       3 |      1 |      2 |          1 |             1 |             2 |         2 |\n|       3 |      1 |      2 |          3 |             1 |             2 |         2 |\n|       6 |      2 |      3 |          2 |             2 |             1 |         3 |\n|       7 |      2 |      3 |          2 |             2 |             1 |         3 |\n+---------+--------+--------+------------+---------------+---------------+-----------+\n
    \n

    1,2,3 are expanded out due to multiple matches in the join, and 4,5 are eliminated due to no match

    \n

    You then are grouping by changes.id, which is going to result in (showing with values in CSV list after grouping)

    \n
    +---------+--------+--------+------------+---------------+---------------+-----------+\n| data.id | data.c | data.g | changes.id | changes.c_old | changes.c_new | changes.g |\n+---------+--------+--------+------------+---------------+---------------+-----------+\n|   1,2,3 |  1,1,1 |  2,2,2 |          1 |         1,1,1 |         2,2,2 |     2,2,2 |\n|   1,2,3 |  1,1,1 |  2,2,2 |          3 |         1,1,1 |         2,2,2 |     2,2,2 |\n|     6,7 |    2,2 |    3,3 |          2 |           2,2 |           1,1 |       3,3 |\n+---------+--------+--------+------------+---------------+---------------+-----------+\n
    \n

    Since no aggregate or deterministic way of choosing the values from the available options, you are getting the 1 from data.id chosen for both changes.id 1 and 3

    \n

    Depending on what you are wanting, are you wanting 3 rows? all distinct values? you should add that deterministic behavior to the select.

    \n

    btw, I am pretty sure other SQL engines would not allow that select (such as MSSQL) because its ambiguous. As for MySQL behavior in that situation, I believe it chooses the first value from the first row stored, and thus why you probably get 1 in both cases, but it is free to choose whatever value it wishes.

    \n

    http://dev.mysql.com/doc/refman/5.7/en/group-by-extensions.html

    \n
    \n

    MySQL extends the use of GROUP BY so that the select list can refer to nonaggregated columns not named in the GROUP BY clause. This means that the preceding query is legal in MySQL. You can use this feature to get better performance by avoiding unnecessary column sorting and grouping. However, this is useful primarily when all values in each nonaggregated column not named in the GROUP BY are the same for each group. The server is free to choose any value from each group, so unless they are the same, the values chosen are indeterminate. Furthermore, the selection of values from each group cannot be influenced by adding an ORDER BY clause. Sorting of the result set occurs after values have been chosen, and ORDER BY does not affect which values within each group the server chooses.

    \n
    \n soup wrap:

    You are selecting a field that is not part of the group by or being aggregated.

    SELECT data.id from 
    data INNER JOIN changes ON
        data.c=changes.c_old AND data.g=changes.g 
    GROUP BY changes.id
    

    You should use an aggregate function on data.id in the select, or add data.id to the GROUP BY (though I suspect that is not the result you want either).

    The INNER JOIN results in this dataset:

    +---------+--------+--------+------------+---------------+---------------+-----------+
    | data.id | data.c | data.g | changes.id | changes.c_old | changes.c_new | changes.g |
    +---------+--------+--------+------------+---------------+---------------+-----------+
    |       1 |      1 |      2 |          1 |             1 |             2 |         2 |
    |       1 |      1 |      2 |          3 |             1 |             2 |         2 |
    |       2 |      1 |      2 |          1 |             1 |             2 |         2 |
    |       2 |      1 |      2 |          3 |             1 |             2 |         2 |
    |       3 |      1 |      2 |          1 |             1 |             2 |         2 |
    |       3 |      1 |      2 |          3 |             1 |             2 |         2 |
    |       6 |      2 |      3 |          2 |             2 |             1 |         3 |
    |       7 |      2 |      3 |          2 |             2 |             1 |         3 |
    +---------+--------+--------+------------+---------------+---------------+-----------+
    

    1, 2 and 3 are expanded out due to multiple matches in the join, and 4 and 5 are eliminated due to no match.

    You then are grouping by changes.id, which is going to result in this (showing the values as CSV lists after grouping):

    +---------+--------+--------+------------+---------------+---------------+-----------+
    | data.id | data.c | data.g | changes.id | changes.c_old | changes.c_new | changes.g |
    +---------+--------+--------+------------+---------------+---------------+-----------+
    |   1,2,3 |  1,1,1 |  2,2,2 |          1 |         1,1,1 |         2,2,2 |     2,2,2 |
    |   1,2,3 |  1,1,1 |  2,2,2 |          3 |         1,1,1 |         2,2,2 |     2,2,2 |
    |     6,7 |    2,2 |    3,3 |          2 |           2,2 |           1,1 |       3,3 |
    +---------+--------+--------+------------+---------------+---------------+-----------+
    

    Since there is no aggregate or deterministic way of choosing among the available values, you are getting the 1 from data.id chosen for both changes.id 1 and 3.

    Depending on what you want (three rows? all distinct values?), you should add that deterministic behaviour to the select.

    By the way, I am pretty sure other SQL engines (such as MSSQL) would not allow that select because it's ambiguous. As for MySQL's behaviour in that situation, I believe it chooses the first value from the first row stored, which is why you probably get 1 in both cases, but it is free to choose whatever value it wishes.
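    A sketch of making the choice deterministic, in Python's sqlite3 with the tables populated to mirror the dataset above: aggregate data.id explicitly (GROUP_CONCAT for all matches, or MIN/MAX for one) instead of selecting it bare.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE data    (id INTEGER, c INTEGER, g INTEGER);
CREATE TABLE changes (id INTEGER, c_old INTEGER, c_new INTEGER, g INTEGER);
INSERT INTO data    VALUES (1,1,2), (2,1,2), (3,1,2), (6,2,3), (7,2,3);
INSERT INTO changes VALUES (1,1,2,2), (2,2,1,3), (3,1,2,2);
""")

# GROUP_CONCAT makes every matching data.id visible per changes.id,
# rather than leaving the engine to pick one arbitrarily.
rows = con.execute("""
SELECT changes.id, GROUP_CONCAT(data.id) AS matched_ids
FROM   data
JOIN   changes ON data.c = changes.c_old AND data.g = changes.g
GROUP  BY changes.id
ORDER  BY changes.id
""").fetchall()
print(rows)
```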

    http://dev.mysql.com/doc/refman/5.7/en/group-by-extensions.html

    MySQL extends the use of GROUP BY so that the select list can refer to nonaggregated columns not named in the GROUP BY clause. This means that the preceding query is legal in MySQL. You can use this feature to get better performance by avoiding unnecessary column sorting and grouping. However, this is useful primarily when all values in each nonaggregated column not named in the GROUP BY are the same for each group. The server is free to choose any value from each group, so unless they are the same, the values chosen are indeterminate. Furthermore, the selection of values from each group cannot be influenced by adding an ORDER BY clause. Sorting of the result set occurs after values have been chosen, and ORDER BY does not affect which values within each group the server chooses.

    qid & accept id: (21875568, 21875758) query: using if count in the part of the sql statement soup:

    Using subquery, i would have done something like this in place of last condition :

    \n
    messages.from = 'Jack' AND \ntype = 'message' AND \n1 =(select count(primary_key) from messages /* 1=count : this would ensure that \n                                               condition works only if \n                                               1 row is returned*/\nwhere (messages.from='Jack' AND type='message')  )\n
    \n

    So final SQL would have been :

    \n
    SELECT \n    *\nFROM\n    messages\nWHERE\n       (messages.to = 'Jack' AND (type = 'message' OR type = 'reply'))\n    OR (messages.from = 'Jack' AND type = 'reply')\n    OR (messages.from = 'Jack' AND \n        type = 'message' AND \n        1 =(select count(primary_key) from messages\n        where (messages.from='Jack' AND type='message')  ))\n\n        ORDER BY messages.message_id DESC , messages.id DESC\n
    \n soup wrap:

    Using a subquery, I would have done something like this in place of the last condition:

    messages.from = 'Jack' AND 
    type = 'message' AND 
    1 =(select count(primary_key) from messages /* 1=count : this would ensure that 
                                                   condition works only if 
                                                   1 row is returned*/
    where (messages.from='Jack' AND type='message')  )
    

    So the final SQL would have been:

    SELECT 
        *
    FROM
        messages
    WHERE
           (messages.to = 'Jack' AND (type = 'message' OR type = 'reply'))
        OR (messages.from = 'Jack' AND type = 'reply')
        OR (messages.from = 'Jack' AND 
            type = 'message' AND 
            1 =(select count(primary_key) from messages
            where (messages.from='Jack' AND type='message')  ))
    
    ORDER BY messages.message_id DESC, messages.id DESC
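    A runnable sketch of the count-gated condition on made-up rows, using Python's sqlite3 (the from column is quoted since it is a keyword):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE messages (id INTEGER, "from" TEXT, "to" TEXT, type TEXT);
INSERT INTO messages VALUES
 (1, 'Jack', 'Jill', 'message'),
 (2, 'Jack', 'Jill', 'message'),
 (3, 'Jill', 'Jack', 'reply');
""")

# Jack's messages are only returned when exactly one such row exists.
GATED = """
SELECT id FROM messages m
WHERE m."from" = 'Jack' AND m.type = 'message'
  AND 1 = (SELECT COUNT(*) FROM messages
           WHERE "from" = 'Jack' AND type = 'message')
"""
rows = con.execute(GATED).fetchall()
print(rows)  # two Jack messages exist, so the gate fails -> []

con.execute("DELETE FROM messages WHERE id = 2")
rows_after = con.execute(GATED).fetchall()
print(rows_after)  # now exactly one remains -> [(1,)]
```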
    
    qid & accept id: (21950759, 21950834) query: Extracting data from two tables with same in the form of appending soup:

    Use UNION:

    \n
    UNION is used to combine the result from multiple SELECT statements into a single result set.\n
    \n
    \n
    select * from jay\nUNION \nselect * from Ren\n
    \n

    SQl FIDDLE

    \n

    OUTPUT

    \n

    enter image description here

    \n soup wrap:

    Use union

    UNION is used to combine the result from multiple SELECT statements into a single result set.
    

    select * from jay
    UNION 
    select * from Ren
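    A quick runnable illustration (Python's sqlite3, with hypothetical jay and Ren tables) of UNION's duplicate removal versus UNION ALL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE jay (name TEXT);
CREATE TABLE Ren (name TEXT);
INSERT INTO jay VALUES ('a'), ('b');
INSERT INTO Ren VALUES ('b'), ('c');  -- 'b' appears in both tables
""")

# UNION appends the two result sets and removes duplicate rows;
# UNION ALL keeps every row from both.
union_rows = con.execute(
    "SELECT * FROM jay UNION SELECT * FROM Ren").fetchall()
union_all_rows = con.execute(
    "SELECT * FROM jay UNION ALL SELECT * FROM Ren").fetchall()
print(len(union_rows), len(union_all_rows))  # 3 4
```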
    

    SQL Fiddle

    OUTPUT

    (screenshot of the combined result set)

    qid & accept id: (21956650, 21957167) query: SQL - How to list items which are below the average soup:

    Change the select list for whatever columns you want to display, but this will limit the results as you want, for a given testid (replace testXYZ with the actual test you're searching on)

    \n
    SELECT t.Test_name, s.*, sc.*\n  FROM Tests t\n  JOIN Scores sc\n    ON t.id_Tests = sc.Tests_id_Tests\n  JOIN Students s\n    ON sc.Students_id_Students = s.id_Students\n WHERE t.id_Tests = 'textXYZ'\n   and sc.result <\n       (select avg(x.result)\n          from scores x\n         where sc.Tests_id_Tests = x.Tests_id_Tests)\n
    \n

    Note: To run this for ALL tests, and have scores limited to those that are below the average for each test, you would just leave that one line out of the where clause and run:

    \n
    SELECT t.Test_name, s.*, sc.*\n  FROM Tests t\n  JOIN Scores sc\n    ON t.id_Tests = sc.Tests_id_Tests\n  JOIN Students s\n    ON sc.Students_id_Students = s.id_Students\n WHERE sc.result <\n       (select avg(x.result)\n          from scores x\n         where sc.Tests_id_Tests = x.Tests_id_Tests)\n
    \n soup wrap:

    Change the select list to whatever columns you want to display; this will limit the results as you want for a given test id (replace 'testXYZ' with the actual test you're searching on):

    SELECT t.Test_name, s.*, sc.*
      FROM Tests t
      JOIN Scores sc
        ON t.id_Tests = sc.Tests_id_Tests
      JOIN Students s
        ON sc.Students_id_Students = s.id_Students
     WHERE t.id_Tests = 'testXYZ'
       and sc.result <
           (select avg(x.result)
              from scores x
             where sc.Tests_id_Tests = x.Tests_id_Tests)
    

    Note: To run this for ALL tests, with scores limited to those below the average for each test, just leave that one line out of the where clause and run:

    SELECT t.Test_name, s.*, sc.*
      FROM Tests t
      JOIN Scores sc
        ON t.id_Tests = sc.Tests_id_Tests
      JOIN Students s
        ON sc.Students_id_Students = s.id_Students
     WHERE sc.result <
           (select avg(x.result)
              from scores x
             where sc.Tests_id_Tests = x.Tests_id_Tests)
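    A runnable sketch of the correlated subquery on made-up scores (Python's sqlite3, tables trimmed to the columns the filter needs):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Scores (Tests_id_Tests TEXT, Students_id_Students INTEGER,
                     result REAL);
INSERT INTO Scores VALUES
 ('t1', 1, 40), ('t1', 2, 60), ('t1', 3, 80),  -- average 60
 ('t2', 1, 10), ('t2', 2, 30);                 -- average 20
""")

# The subquery recomputes the average per test (it is correlated on
# Tests_id_Tests), so only below-average results survive for each test.
rows = con.execute("""
SELECT Tests_id_Tests, Students_id_Students, result
FROM   Scores sc
WHERE  sc.result < (SELECT AVG(x.result)
                    FROM   Scores x
                    WHERE  x.Tests_id_Tests = sc.Tests_id_Tests)
ORDER  BY Tests_id_Tests, Students_id_Students
""").fetchall()
print(rows)  # [('t1', 1, 40.0), ('t2', 1, 10.0)]
```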
    
    qid & accept id: (21969425, 21969491) query: SQL Plus - Running a query based on user input soup:

    Try:

    \n
    Select columnA, columnB, columnC, columnD\nfrom myTable t\nwhere t.&searchColumn in ('&searchParam')\n
    \n

    Also if they are going to be typing in the substitution values, you don't need to define them earlier.

    \n

    And I would change "IN" to "="

    \n

    Or if they need to type in multiple values to search on:

    \n
    Select columnA, columnB, columnC,columnD\nfrom myTable t\nwhere t.&searchColumn in (&searchParam)\n
    \n

    But they will have to have correct input, such as:

    \n

    'string','string1'

    \n

    2010,2011

    \n

    If you want them to be able to type the substitution values into the file (at the top) using DEFINE, this is what you would do:

    \n
    define searchColumn = column_name_here\ndefine searchParam = search_term_here\n\nSelect columnA, columnB, columnC,columnD\nfrom myTable t\nwhere t.&searchColumn in ('&searchParam')\n
    \n

    Again, you might want to change IN to =

    \n

    On a side note, if the substiution variable is not defined, the user will be prompted to enter it. So it depends on whether you want them to be prompted to enter it each time it's run, or if you want them to be able to define the variables at the top of the script, before they run it.

    \n soup wrap:

    Try:

    Select columnA, columnB, columnC, columnD
    from myTable t
    where t.&searchColumn in ('&searchParam')
    

    Also if they are going to be typing in the substitution values, you don't need to define them earlier.

    And I would change "IN" to "="

    Or if they need to type in multiple values to search on:

    Select columnA, columnB, columnC,columnD
    from myTable t
    where t.&searchColumn in (&searchParam)
    

    But they will have to have correct input, such as:

    'string','string1'

    2010,2011

    If you want them to be able to type the substitution values into the file (at the top) using DEFINE, this is what you would do:

    define searchColumn = column_name_here
    define searchParam = search_term_here
    
    Select columnA, columnB, columnC,columnD
    from myTable t
    where t.&searchColumn in ('&searchParam')
    

    Again, you might want to change IN to =

    On a side note, if the substitution variable is not defined, the user will be prompted to enter it. So it depends on whether you want them to be prompted each time it's run, or to be able to define the variables at the top of the script before they run it.

    qid & accept id: (21977220, 21980477) query: Querying time series in Postgress soup:

    One problem with the way you are currently doing it is that it does not generate a \ndata point in any invervals which do not have any sample data. For example, if the \nuser wants a chart from seconds 0 - 10 in steps of 1, then your chart won't have any\npoints after 5. Maybe that doesn't matter in your use case though.

    \n

    Another issue, as you indicated, it would be nice to be able to use some kind of\nlinear interpolation to attribute the measurements in case the resolution of the\nrequested plots is greater than the available data.

    \n

    To solve the first of these, instead of selecting data purely from the sample table,\nwe can join together the data with a generated series that matches the user's\nrequest. The latter can be generated using this:

    \n
    SELECT int4range(rstart, rstart+1) AS srange \nFROM generate_series(0,10,1) AS seq(rstart)\n
    \n

    The above query will generate a series of ranges, from 0 to 10 with a step size\nof 1. The output looks like this:

    \n
     srange\n---------\n [0,1)\n [1,2)\n [2,3)\n [3,4)\n [4,5)\n [5,6)\n [6,7)\n [7,8)\n [8,9)\n [9,10)\n [10,11)\n(11 rows)\n
    \n

    We can join this to the data table, using the && operator (which filters on overlap).

    \n

    The second point can be addressed by calculating the proportion of each data row\nwhich falls into each sample window.

    \n

    Here is the full query:

    \n
    SELECT lower(srange) AS t,\n    sum (CASE \n        -- when data range is fully contained in sample range\n        WHEN drange <@ srange THEN value\n        -- when data range and sample range overlap, calculate the ratio of the intersection\n        -- and use that to apportion the value\n        ELSE CAST (value AS DOUBLE PRECISION) * (upper(drange*srange) - lower(drange*srange)) / (upper(drange)-lower(drange))\n    END) AS value\nFROM (\n    -- Generate the range to be plotted (the sample ranges).\n    -- To change the start / end of the range, change the 1st 2 arguments\n    -- of the generate_series. To change the step size change BOTH the 3rd\n    -- argument and the amount added to rstart (they must be equal).\n    SELECT int4range(rstart, rstart+1) AS srange FROM generate_series(0,10,1) AS seq(rstart)\n) AS s\nLEFT JOIN (\n    -- Note the use of the lag window function so that for each row, we get\n    -- a range from the previous timestamp up to the current timestamp\n    SELECT int4range(coalesce(lag(ts) OVER (order by ts), 0), ts) AS drange, value FROM data\n) AS d ON srange && drange\nGROUP BY lower(srange)\nORDER BY lower(srange)\n
    \n

    Result:

    \n
     t  |      value\n----+------------------\n  0 |                5\n  1 |                2\n  2 | 3.33333333333333\n  3 | 3.33333333333333\n  4 | 3.33333333333333\n  5 |\n  6 |\n  7 |\n  8 |\n  9 |\n 10 |\n(11 rows)\n
    \n

    It is not likely any index will be used on ts in this query as it stands, and\nif the data table is large then performance is going to be dreadful.

    \n

    There are some things you could try to help with this. One suggestion could be\nto redesign the data table such that the first column contains the time range of\nthe data sample, rather than just the ending time, and then you could add a\nrange index. You could then remove the windowing function from the second\nsubquery, and hopefully the index can be used.

    \n

    Read up on range types here.

    \n

    Caveat Emptor: I have not tested this other than on the tiny data sample you supplied.\nI have used something similar to this for a somewhat similar purpose though.

    \n soup wrap:

    One problem with the way you are currently doing it is that it does not generate a data point in any intervals which do not have any sample data. For example, if the user wants a chart from seconds 0 - 10 in steps of 1, then your chart won't have any points after 5. Maybe that doesn't matter in your use case though.

    Another issue, as you indicated: it would be nice to be able to use some kind of linear interpolation to apportion the measurements in case the resolution of the requested plot is finer than that of the available data.

    To solve the first of these, instead of selecting data purely from the sample table, we can join together the data with a generated series that matches the user's request. The latter can be generated using this:

    SELECT int4range(rstart, rstart+1) AS srange 
    FROM generate_series(0,10,1) AS seq(rstart)
    

    The above query will generate a series of ranges, from 0 to 10 with a step size of 1. The output looks like this:

     srange
    ---------
     [0,1)
     [1,2)
     [2,3)
     [3,4)
     [4,5)
     [5,6)
     [6,7)
     [7,8)
     [8,9)
     [9,10)
     [10,11)
    (11 rows)
    

    We can join this to the data table, using the && operator (which filters on overlap).

    The second point can be addressed by calculating the proportion of each data row which falls into each sample window.
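    If it helps to see that apportioning rule in isolation, here is a small Python sketch of the same CASE logic (the function name and sample numbers are my own, purely for illustration):

```python
def apportion(value, drange, srange):
    """Split `value` across a sample window by interval overlap.

    `drange` and `srange` are (lo, hi) half-open integer ranges,
    mirroring the int4range logic in the query below.
    """
    d_lo, d_hi = drange
    s_lo, s_hi = srange
    # Intersection of the two half-open ranges (drange * srange in SQL).
    i_lo, i_hi = max(d_lo, s_lo), min(d_hi, s_hi)
    if i_lo >= i_hi:
        return 0.0           # no overlap: contributes nothing
    if d_lo >= s_lo and d_hi <= s_hi:
        return float(value)  # data range fully contained: full value
    # Partial overlap: scale by the overlapping fraction.
    return value * (i_hi - i_lo) / (d_hi - d_lo)

# A data row covering seconds [2,5) overlaps one third of the
# sample window [3,4), so one third of its value lands there.
print(apportion(10, (2, 5), (3, 4)))  # approximately 3.333
```

    That matches the ratio the query computes with upper/lower on the intersected ranges.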

    Here is the full query:

    SELECT lower(srange) AS t,
        sum (CASE 
            -- when data range is fully contained in sample range
            WHEN drange <@ srange THEN value
            -- when data range and sample range overlap, calculate the ratio of the intersection
            -- and use that to apportion the value
            ELSE CAST (value AS DOUBLE PRECISION) * (upper(drange*srange) - lower(drange*srange)) / (upper(drange)-lower(drange))
        END) AS value
    FROM (
        -- Generate the range to be plotted (the sample ranges).
        -- To change the start / end of the range, change the 1st 2 arguments
        -- of the generate_series. To change the step size change BOTH the 3rd
        -- argument and the amount added to rstart (they must be equal).
        SELECT int4range(rstart, rstart+1) AS srange FROM generate_series(0,10,1) AS seq(rstart)
    ) AS s
    LEFT JOIN (
        -- Note the use of the lag window function so that for each row, we get
        -- a range from the previous timestamp up to the current timestamp
        SELECT int4range(coalesce(lag(ts) OVER (order by ts), 0), ts) AS drange, value FROM data
    ) AS d ON srange && drange
    GROUP BY lower(srange)
    ORDER BY lower(srange)
    

    Result:

     t  |      value
    ----+------------------
      0 |                5
      1 |                2
      2 | 3.33333333333333
      3 | 3.33333333333333
      4 | 3.33333333333333
      5 |
      6 |
      7 |
      8 |
      9 |
     10 |
    (11 rows)
    

    It is not likely any index will be used on ts in this query as it stands, and if the data table is large then performance is going to be dreadful.

    There are some things you could try to help with this. One suggestion could be to redesign the data table such that the first column contains the time range of the data sample, rather than just the ending time, and then you could add a range index. You could then remove the windowing function from the second subquery, and hopefully the index can be used.

    Read up on range types here.

    Caveat Emptor: I have not tested this other than on the tiny data sample you supplied. I have used something similar to this for a somewhat similar purpose though.

    qid & accept id: (22021194, 22021360) query: How to set a value with the return value of a stored procedure soup:

    soup wrap:

    Create an OUTPUT parameter in your stored procedure, use it to store the value, and then use that parameter in your UPDATE statement. Something like this:

    DECLARE @OutParam Datatype;
    
    EXECUTE SP1 @param1=C1, @OUT_Param = @OutParam OUTPUT  --<--
    
    --Now you can use this OUTPUT parameter in your Update statement.
    
    UPDATE Table1 
    SET C2 = @OutParam
    

    UPDATE

    After reading your comments, I think this is what you are trying to do: pass the value of the C1 column from table Table1 to the stored procedure, and then update the corresponding C2 column of Table1 with the value returned by the stored procedure.

    For this, the best way is to create a table-type parameter and pass the values of C1 as a table. See here for a detailed answer about how to pass a table to a stored procedure.

    I haven't tested it, but in this situation I guess you could do something like this. I don't recommend this method if you have a large table; in that case you are better off with a table-type parameter procedure.

    -- Get C1 values in a temp table
    
    SELECT DISTINCT C1 INTO #temp
    FROM Table1
    
    -- Declare two variables:
    -- 1) Return type of the stored procedure
    -- 2) Datatype of C1
    
    DECLARE @C1_Var DataType;
    DECLARE @param1 DataType;
    
    WHILE EXISTS(SELECT * FROM #temp)
    BEGIN
         -- Select TOP 1 C1 into @C1_Var
          SELECT TOP 1 @C1_Var = C1 FROM #temp
    
          -- Execute the proc, capturing its return value in @param1
          EXECUTE @param1 = SP1 @C1_Var
    
          -- Update the table
          UPDATE Table1
          SET   C2 = @param1
          WHERE C1 = @C1_Var
    
          -- Delete from the temp table to eventually exit the loop
          DELETE FROM #temp WHERE C1 = @C1_Var
    
    END
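    The per-key pattern of that loop is easy to prototype outside T-SQL. Here is a rough Python/sqlite3 sketch, with a plain function standing in for the stored procedure (table name and values are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (C1 INTEGER, C2 INTEGER)")
conn.executemany("INSERT INTO Table1 (C1) VALUES (?)", [(1,), (2,), (2,), (3,)])

def sp1(c1):
    # Stand-in for the stored procedure's return value.
    return c1 * 10

# Same pattern as the WHILE loop: one update per distinct C1 value.
for (c1,) in conn.execute("SELECT DISTINCT C1 FROM Table1").fetchall():
    conn.execute("UPDATE Table1 SET C2 = ? WHERE C1 = ?", (sp1(c1), c1))

rows = conn.execute("SELECT C1, C2 FROM Table1 ORDER BY C1").fetchall()
print(rows)  # [(1, 10), (2, 20), (2, 20), (3, 30)]
```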
    
    qid & accept id: (22040663, 22057600) query: Flattening nested record in postgres soup:

    soup wrap:

    You don't need the ROW constructor there, and so you can expand the record by using (foo).*:

    WITH RECURSIVE t AS (
        SELECT d as foo FROM some_multicolumn_table as d
    UNION ALL
        SELECT foo FROM t WHERE random() < .5
    )
    SELECT (foo).* FROM t;
    

    Although this query could simply be written as:

    WITH RECURSIVE t AS (
        SELECT d.* FROM some_multicolumn_table as d
    UNION ALL
        SELECT t.* FROM t WHERE random() < .5
    )
    SELECT * FROM t;
    

    And I recommend trying to keep it as simple as possible. But I'm assuming it was just an illustrative example.

    qid & accept id: (22052942, 22053080) query: Replace column output in a more readable form Oracle - SQL soup:

    soup wrap:

    If you want to hard-code the translations

    SELECT (CASE paymentType
                 WHEN 'ePay' THEN 'electronic payment'
                 WHEN 'cPay' THEN 'cash payment'
                 WHEN 'dPay' THEN 'deposit account payment' 
                 WHEN 'ccPay' THEN 'credit card payment'
                 ELSE paymentType
             END) payment_type,
           other_columns
      FROM payment
    

    Normally, though, you'd create a lookup table and join to that

    SELECT payment_type.payment_type_description,
           <>
      FROM payment pay
           JOIN payment_type ON (pay.paymentType = payment_type.paymentType)
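    To illustrate the lookup-table approach, here is a minimal, self-contained sketch in Python with SQLite (the sample payments and the pt alias are my own):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE payment (id INTEGER, paymentType TEXT);
CREATE TABLE payment_type (paymentType TEXT, payment_type_description TEXT);
INSERT INTO payment VALUES (1, 'ePay'), (2, 'cPay');
INSERT INTO payment_type VALUES
    ('ePay',  'electronic payment'),
    ('cPay',  'cash payment'),
    ('dPay',  'deposit account payment'),
    ('ccPay', 'credit card payment');
""")

# The join replaces each code with its human-readable description.
rows = conn.execute("""
    SELECT pay.id, pt.payment_type_description
    FROM payment pay
    JOIN payment_type pt ON pay.paymentType = pt.paymentType
    ORDER BY pay.id
""").fetchall()
print(rows)  # [(1, 'electronic payment'), (2, 'cash payment')]
```

    Adding a new payment type then only requires a new row in the lookup table, not a code change.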
    
    qid & accept id: (22055558, 22065746) query: Stored procedure to update temp table based on Date in SQL Server soup:

    soup wrap:

    You can write a stored procedure, like you've done, and pass the date to it.

    CREATE PROCEDURE check_scoretable  
    ( 
        @pDate DATE = NULL
    )
    as
    

    However, rather than a cursor, do something like

    SELECT tm.name,sum(tm.noMatches) as NumberMatches,sum(tm.ownGoals) as OwnGoals,
           sum(tm.otherGoals) as Othergoals,sum(tm.Points) as Points
    FROM Team tm
    JOIN Matches mc on mc.homeId=tm.id or mc.outId=tm.id
    WHERE mc.matchDate <= @pDate
    

    This will give you the results you are looking for.

    CAVEAT: Your database design is not good, because of the redundant data in it. For example, you are tracking the number of matches in the team table, when you can compute the number of matches by

    SELECT count(*) FROM matches WHERE homeId=@id or OutId=@id
    

    Same type of operation for total goals, etc.

    The problem you might run into is, if for some reason, the team record is not updated, the number of matches listed in team could be different than the number of matches from totaling up the matches played.

    qid & accept id: (22134638, 22134687) query: Max count() for every group of GROUP BY soup:

    soup wrap:

    To get the option counts, you can do:

    select `group`, `option`, count(*) as cnt
    from table t
    group by `group`, `option`;
    

    There are several ways to get the option corresponding to the maximum value. I think the easiest in this case is the substring_index()/group_concat() method:

    select `group`,
           substring_index(group_concat(`option` order by cnt desc), ',', 1) as maxoption
    from (select `group`, `option`, count(*) as cnt
          from table t
          group by `group`, `option`
         ) tgo
    group by `group`;
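    substring_index()/group_concat() is MySQL-specific, so as a portable illustration of the same "option with the maximum count per group" idea, here is a sketch in Python with SQLite using a correlated subquery instead (columns renamed to grp/opt, since group and option are awkward identifiers; sample rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (grp TEXT, opt TEXT);
INSERT INTO t VALUES
    ('a', 'x'), ('a', 'x'), ('a', 'y'),
    ('b', 'y'), ('b', 'z'), ('b', 'z');
""")

# Step 1 builds the per-(group, option) counts; step 2 keeps,
# per group, only the option whose count equals the group's max.
rows = conn.execute("""
    WITH counts AS (
        SELECT grp, opt, COUNT(*) AS cnt
        FROM t
        GROUP BY grp, opt
    )
    SELECT grp, opt
    FROM counts c
    WHERE cnt = (SELECT MAX(cnt) FROM counts c2 WHERE c2.grp = c.grp)
    ORDER BY grp
""").fetchall()
print(rows)  # [('a', 'x'), ('b', 'z')]
```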
    
    qid & accept id: (22170964, 22173791) query: Laravel 4 Eloquent - Similar products based on price soup:

    soup wrap:

    Making Tzook's answer more Laravel friendly.

    In your Variant model, add the function.

    public function scopeOfSimilarPrice($query, $price, $limit = 3)
    {
        return $query->orderBy(DB::raw('ABS(`price` - '.$price.')'))->take($limit);
    }
    

    Now this functionality is more dynamic and you can use it anywhere and is much easier to use.

    Now since we already know your product, I actually think lazy-loading is easier to read and understand.

    // Find your product
    $product = Product::find(1);
    
    // Eager load variants with closest price
    $product->load('variants')->ofSimilarPrice($productPrice);
    
    foreach($product->variants as $variant) {
        echo $variant->details;
        echo $variant->price;
    }
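    The scope boils down to ORDER BY ABS(price - ?) LIMIT n. Here is a quick sanity check of that ordering in Python with SQLite (sample prices are invented, and id is added as a tie-breaker to make the result deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE variants (id INTEGER, price REAL)")
conn.executemany("INSERT INTO variants VALUES (?, ?)",
                 [(1, 5.0), (2, 9.0), (3, 10.5), (4, 20.0), (5, 11.0)])

target = 10.0
# Order by distance to the target price; closest three win.
rows = conn.execute(
    "SELECT id FROM variants ORDER BY ABS(price - ?), id LIMIT 3",
    (target,),
).fetchall()
print(rows)  # [(3,), (2,), (5,)] -- prices 10.5, 9.0 and 11.0
```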
    
    qid & accept id: (22180392, 22180851) query: How to get results for distinct values using sql in oracle soup:

    soup wrap:

    You can get the results you seem to want using aggregation:

    select max(MONITOR_ALERT_INSTANCE_ID) as Id, description, max(created_date) as created_date
    from monitor_alert_instance 
    where description in (select description 
                          from monitor_alert_instance
                          where co_mod_asset_id = 1223
                         )
    group by description;
    

    Note that I simplified the subquery. The distinct is redundant when using group by. And neither is necessary when using in.

    EDIT:

    I think you can get the same result with this query:

    select max(MONITOR_ALERT_INSTANCE_ID) as Id, description, max(created_date) as created_date
    from monitor_alert_instance 
    group by description
    having max(case when co_mod_asset_id = 1223 then 1 else 0 end) = 1;
    

    The having clause makes sure that the description is for asset 1223.

    Which performs better depends on a number of factors, but this might perform better than the in version. (Or the table may be small enough that any difference in performance is negligible.)
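    As a small illustration of the HAVING MAX(CASE ...) filter, here is a sketch in Python with SQLite (sample rows invented; created_date omitted for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE monitor_alert_instance (
    id INTEGER, description TEXT, co_mod_asset_id INTEGER
);
INSERT INTO monitor_alert_instance VALUES
    (1, 'disk full', 1223),
    (2, 'disk full', 9999),
    (3, 'cpu high',  9999);
""")

# The HAVING clause keeps a description group only if at least one
# of its rows belongs to asset 1223.
rows = conn.execute("""
    SELECT MAX(id) AS id, description
    FROM monitor_alert_instance
    GROUP BY description
    HAVING MAX(CASE WHEN co_mod_asset_id = 1223 THEN 1 ELSE 0 END) = 1
    ORDER BY description
""").fetchall()
print(rows)  # [(2, 'disk full')] -- 'cpu high' never matched asset 1223
```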

    qid & accept id: (22184025, 22184098) query: using a single query to eliminate N+1 select issue soup:

    soup wrap:

    The simple way to do this in Postgres uses distinct on:

    select distinct on (unit_id) r.*
    from reports r
    order by unit_id, time desc;
    

    This construct is specific to Postgres and databases that use its code base. The expression distinct on (unit_id) says "I want to keep only one row for each unit_id". The row chosen is the first row encountered with that unit_id based on the order by clause.

    EDIT:

    Your original query would be, assuming that id increases along with the time field:

    SELECT r.*
    FROM reports r
    WHERE id IN (SELECT max(id)
                 FROM reports
                 GROUP BY unit_id
                );
    

    You might also try this as a not exists:

    select r.*
    from reports r
    where not exists (select 1
                      from reports r2
                      where r2.unit_id = r.unit_id and
                            r2.time > r.time
                     );
    

    I thought the distinct on would perform well. This last version (and maybe the previous) would really benefit from an index on reports(unit_id, time).
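    SQLite has no DISTINCT ON, but the max(id) IN (...) form runs anywhere; here is a minimal sketch in Python with SQLite (sample reports invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE reports (id INTEGER PRIMARY KEY, unit_id INTEGER, time INTEGER);
INSERT INTO reports VALUES
    (1, 10, 100), (2, 10, 200),
    (3, 20, 100), (4, 20, 300);
""")

# One query instead of one SELECT per unit (the N+1 problem):
# keep only the row carrying the max id of each unit.
rows = conn.execute("""
    SELECT r.id, r.unit_id, r.time
    FROM reports r
    WHERE r.id IN (SELECT MAX(id) FROM reports GROUP BY unit_id)
    ORDER BY r.unit_id
""").fetchall()
print(rows)  # [(2, 10, 200), (4, 20, 300)] -- latest report per unit
```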

    qid & accept id: (22205060, 22205992) query: Insert based on another column's value (Oracle 11g) soup:
    soup wrap:
    Update table1 
    set Update_time = (case when value_a < 0.1 and Update_time is null then sysdate
                            when value_a > 0.1 and Update_time is not null then null
                       else Update_time end);
    

    Change sysdate to your desired value.
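    Here is a quick sketch of that conditional UPDATE in Python with SQLite, with the string 'now' standing in for sysdate (rows invented so that each CASE branch fires):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (name TEXT, value_a REAL, update_time TEXT);
INSERT INTO table1 VALUES
    ('a', 0.05, NULL),          -- below threshold, no timestamp: gets one
    ('b', 0.50, '2014-01-01'),  -- above threshold, has timestamp: cleared
    ('c', 0.50, NULL);          -- above threshold, no timestamp: unchanged
""")

# ELSE keeps the existing value, so rows matching neither branch
# are effectively untouched.
conn.execute("""
    UPDATE table1
    SET update_time = CASE
        WHEN value_a < 0.1 AND update_time IS NULL     THEN 'now'
        WHEN value_a > 0.1 AND update_time IS NOT NULL THEN NULL
        ELSE update_time END
""")
rows = conn.execute(
    "SELECT name, update_time FROM table1 ORDER BY name").fetchall()
print(rows)  # [('a', 'now'), ('b', None), ('c', None)]
```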

    EDIT:

    Include the edit in the MERGE statement. See the query below (not tested with real data). This way we do not run the update on the entire table.

    Merge into table1 t1
    using table1_staging t1s
    on (t1.name = t1s.name)
    when matched then
    update set t1.value_a = t1s.value_a,
    t1.Update_time = (case when t1s.value_a < 0.1 and t1.Update_time is null then sysdate
                                when t1s.value_a > 0.1 and t1.Update_time is not null then null
                           else t1.Update_time end)
    when not matched then
    INSERT (name, value_a)
        VALUES (t1s.name, t1s.value_a);
    
    qid & accept id: (22228967, 22229203) query: Showing all values in Group By with inclusion of CASE soup:

    soup wrap:

    The way I would go about this is to create your own table of values using a table value constructor:

    SELECT  OldSeverity, NewSeverity
    FROM    (VALUES 
                ('Critical', 'Critical'),
                ('High', 'Critical'),
                ('Medium', 'Medium'),
                ('Low', 'Medium')
            ) s (OldSeverity, NewSeverity);
    

    This gives a table you can select from, then left join to your existing table:

    SELECT  Severity = s.NewSeverity,
            Total = COUNT(t.Severity)
    FROM    (VALUES 
                ('Critical', 'Critical'),
                ('High', 'Critical'),
                ('Medium', 'Medium'),
                ('Low', 'Medium')
            ) s (OldSeverity, NewSeverity)
            LEFT JOIN #Test t
                ON t.Severity = s.OldSeverity
    GROUP BY s.NewSeverity;
    

    This will give the desired results.
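    The same shape can be sketched portably with a CTE in place of the table value constructor. Here is a Python/SQLite version (sample severities invented) showing that a bucket still appears even when nothing maps into it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test (severity TEXT);
INSERT INTO test VALUES ('Critical'), ('High'), ('High');
""")

# The mapping drives the query, so the LEFT JOIN preserves
# every NewSeverity bucket, matched or not.
rows = conn.execute("""
    WITH s(old_severity, new_severity) AS (VALUES
        ('Critical', 'Critical'),
        ('High',     'Critical'),
        ('Medium',   'Medium'),
        ('Low',      'Medium'))
    SELECT s.new_severity, COUNT(t.severity) AS total
    FROM s
    LEFT JOIN test t ON t.severity = s.old_severity
    GROUP BY s.new_severity
    ORDER BY s.new_severity
""").fetchall()
print(rows)  # [('Critical', 3), ('Medium', 0)] -- empty bucket kept
```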

    Example on SQL Fiddle


    EDIT

    The problem with the way you are implementing the query is that, although you immediately left join to DimWorkItem, you then inner join to subsequent tables and refer to columns of DimWorkItem in the WHERE clause, which undoes your left join and turns it back into an inner join. You need to place your whole logic into a subquery, and left join to that:

    SELECT  s.NewSeverity AS 'Severity'
            ,COUNT(WI.microsoft_vsts_common_severity) AS 'Total'
    FROM   ( VALUES
                ('Critical','I-High')
                ,('High','I-High')
                ,('Medium','I-Low')
                ,('Low','I-Low')
            )s(OldSeverity,NewSeverity)
           LEFT JOIN 
           (    SELECT  wi.Severity
                FROM    DimWorkItem WI (NOLOCK) 
                       JOIN dbo.DimPerson P 
                         ON p.personsk = WI.system_assignedto__personsk 
                       JOIN DimTeamProject TP 
                         ON WI.TeamProjectSK = TP.ProjectNodeSK 
                       JOIN DimIteration Itr (NOLOCK) 
                         ON Itr.IterationSK = WI.IterationSK 
                       JOIN DimArea Ar (NOLOCK) 
                         ON Ar.AreaSK = WI.AreaSK 
                WHERE  TP.ProjectNodeName = 'ABC' 
                       AND WI.System_WorkItemType = 'Bug' 
                       AND WI.Microsoft_VSTS_CMMI_RootCause <> 'Change Request' 
                       AND Itr.IterationPath LIKE '%\ABC\R1234\Test\IT%' 
                       AND WI.System_State NOT IN ( 'Rejected', 'Closed' ) 
                       AND WI.System_RevisedDate = CONVERT(datetime, '9999', 126)         
            ) WI
                ON WI.Severity = s.OldSeverity   
    GROUP BY s.NewSeverity;
    
    qid & accept id: (22232282, 22232897) query: Select rows until condition met soup:

    soup wrap:

    Use a sub-query to find out at what point you should stop, then return all rows from your starting point to the calculated stop point.

    SELECT
      *
    FROM
      yourTable
    WHERE
          id >= 4
      AND id <= (SELECT MIN(id) FROM yourTable WHERE b = 'F' AND id >= 4)
    

    Note, this assumes that the last record is always an 'F'. You can deal with the last record being a 'T' using a COALESCE.

    SELECT
      *
    FROM
      yourTable
    WHERE
          id >= 4
      AND id <= COALESCE(
                  (SELECT MIN(id) FROM yourTable WHERE b = 'F' AND id >= 4),
                  (SELECT MAX(id) FROM yourTable                          )
                )
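    Here is a small Python/SQLite sketch of the COALESCE version (sample T/F rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE yourTable (id INTEGER PRIMARY KEY, b TEXT);
INSERT INTO yourTable VALUES
    (1,'T'), (2,'F'), (3,'T'), (4,'T'), (5,'T'), (6,'F'), (7,'T');
""")

# Stop at the first 'F' on or after the starting id; if no 'F'
# remains, COALESCE falls back to the last row of the table.
rows = conn.execute("""
    SELECT id, b FROM yourTable
    WHERE id >= 4
      AND id <= COALESCE(
            (SELECT MIN(id) FROM yourTable WHERE b = 'F' AND id >= 4),
            (SELECT MAX(id) FROM yourTable))
""").fetchall()
print(rows)  # [(4, 'T'), (5, 'T'), (6, 'F')]
```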
    
    qid & accept id: (22258390, 22258531) query: Concatenate string with real table SQL SERVER soup:

    soup wrap:

    Try this

    select * 
      from Table1 a
           join Table2 b on a.Col1=case @nivel
                                       when 1 then b.Col1
                                       when 2 then b.Col2
                                       when 3 then b.Col3
                                       ...
                                     end
    

    However, this is extremely bad design. You should consider redesigning your Table2 to contain something like

    | ColNo | ColumnData
    |   1   | Data of column 1
    |   2   | Data of column 2
    |   3   | Data of column 3
    

    then your query will be more straightforward

    select * 
      from Table1 a
           join Table2 b
             on a.Col1 = b.ColumnData 
            and b.ColNo = @nivel
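    Here is a minimal Python/SQLite sketch of the redesigned join (sample data invented, with @nivel passed as a bound parameter):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table2 (ColNo INTEGER, ColumnData TEXT);
INSERT INTO Table2 VALUES (1, 'alpha'), (2, 'beta'), (3, 'gamma');
CREATE TABLE Table1 (Col1 TEXT);
INSERT INTO Table1 VALUES ('beta');
""")

nivel = 2
# One row per column value means the join condition is plain
# equality plus a filter -- no CASE over column names needed.
rows = conn.execute("""
    SELECT a.Col1, b.ColNo
    FROM Table1 a
    JOIN Table2 b ON a.Col1 = b.ColumnData AND b.ColNo = ?
""", (nivel,)).fetchall()
print(rows)  # [('beta', 2)]
```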
    
    qid & accept id: (22311646, 22311834) query: How do I combine two LEFT JOINS without getting crossover? soup:

    soup wrap:

    You just need to add distinct to the counts

    SELECT u.*, COUNT(DISTINCT q.id), COUNT(DISTINCT a.id)
     FROM users u
     LEFT JOIN questions q ON u.id = q.author_id
     LEFT JOIN answers a ON u.id = a.author_id
     GROUP BY u.id
    

    Here's a demo of it in action using Data.SE

    Alternatively you can use inline views in the from clause

    SELECT u.*, q.QuestionCount, a.AnswerCount
    FROM   users u 
           LEFT JOIN (SELECT Count(id) QuestionCount, 
                             author_id 
                      FROM   questions 
                      GROUP  BY author_id) q 
                  ON u.id = q.author_id 
           LEFT JOIN (SELECT Count(id) AnswerCount, 
                             author_id 
                      FROM   answers 
                      GROUP  BY author_id) a 
                  ON u.id = a.author_id 
    

    Demo

    qid & accept id: (22348948, 22349154) query: SQL: How to extract data from one column as different columns, according to different condition? soup:

    soup wrap:

    If I understand correctly, you want to "pivot" the data. In SQLite, one way to do this is by using group by:

    select AP_idx,
           max(case when RF_idx = 0 then Channel end) as ChannelA,
           max(case when RF_idx = 1 then Channel end) as ChannelB
    from table t
    group by AP_idx;
    

    Another way is by using join:

    select ta.AP_idx, ta.channel as ChannelA, tb.channel as ChannelB
    from table ta join
         table tb
         on ta.AP_idx = tb.AP_idx and
            ta.RF_idx = 0 and
            tb.RF_idx = 1;
    

    This might have better performance with the right indexes. On the other hand, the aggregation method is safer if some of the channel values are missing.
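    Both variants run as-is in SQLite; here is the conditional-aggregation form checked with Python's sqlite3 (the table is named readings here because table itself is a reserved word, and the sample data is invented):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    c = conn.cursor()
    # Hypothetical sample data: two radios (RF_idx 0/1) per access point.
    c.executescript("""
    CREATE TABLE readings (AP_idx INTEGER, RF_idx INTEGER, Channel INTEGER);
    INSERT INTO readings VALUES (1, 0, 6), (1, 1, 36), (2, 0, 11), (2, 1, 44);
    """)

    # Conditional aggregation: each CASE picks out one RF_idx (NULL otherwise),
    # and MAX collapses the group of rows per AP_idx into one pivoted row.
    rows = c.execute("""
        SELECT AP_idx,
               MAX(CASE WHEN RF_idx = 0 THEN Channel END) AS ChannelA,
               MAX(CASE WHEN RF_idx = 1 THEN Channel END) AS ChannelB
        FROM readings
        GROUP BY AP_idx
        ORDER BY AP_idx
    """).fetchall()

    print(rows)  # [(1, 6, 36), (2, 11, 44)]
    ```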

    qid & accept id: (22366810, 22368044) query: Calculating a field based on totals from queries in MS Access 2010 soup:

    soup wrap:

    Try using multiple queries as individual reports and as data sources.

    Suppose your tables look like this...

    tblSurveys:

    employeeid    score
    ----------    -----
    1             10
    2             3
    2             2
    3             7
    
    etc...
    

    tblEmployees:

    employeeid    EmployeeName    SupervisorId    
    ----------    -------------   ------------    
    1             Employee 1      1               
    
    etc...
    

    tblSupervisors:

    SuperVisorId   SuperVisorName   RegManagerId
    ------------   --------------   -------------
    1              Super 1          1
    2              Super 2          1
    
    etc...
    

    tblRegManagers:

    RegManagerId    RegManagerName
    -------------   -----------------
    1               Regional Manager 1
    2               Regional Manager 2
    
    etc...
    

    You may be able to create multipurpose queries. See SQL below...

    Query1: This gives you the employee stats

    select SupervisorName,RegManagerId,EmployeeName,
        Promoter,Detractor,surveys,Promoter-Detractor AS score,
        (Promoter-Detractor)/surveys as result 
        from 
        (       
        select a.EmployeeName,b.SupervisorName, b.RegManagerId,
        (select count(*) from tblSurveys where 
        employeeid=a.employeeid and score<7) as Detractor,
        (select count(*) from tblSurveys where 
        employeeid=a.employeeid and score>6)  as Promoter,
        (select count(*) from tblSurveys where employeeid=a.employeeid) as surveys 
        from tblEmployees a left join tblSupervisors b on a.supervisorid=b.supervisorid
        ) 
    

    Query2: This gives you the supervisor stats but also uses employee stats (Query1)

    select supervisorname,RegManagerId, 
        promotersum, detractorsum, surveyssum,(promotersum-detractorsum)/surveyssum 
        from 
        (select SuperVisorName,RegManagerId, sum(Promoter) as PromoterSum, 
        sum(Detractor) as DetractorSum, 
        sum(surveys) as surveyssum from query1 group by SuperVisorName,RegManagerId )
    

    Query3: This gives you Regional Manager stats but also uses supervisor stats (Query2)

    select RegManagerName, promoter_cnt, detractor_cnt, survey_cnt, promoter_cnt-detractor_cnt as score, 
        (promoter_cnt-detractor_cnt)/survey_cnt as result 
        from 
        (select a.RegManagerName, b.RegManagerId, sum(b.promotersum) as promoter_cnt, 
        sum(b.detractorsum) as detractor_cnt, sum(b.surveyssum) as survey_cnt 
        from tblRegManagers a left join query2 b on a.RegManagerId=b.RegManagerId 
        group by a.RegManagerName, b.RegManagerId) 
    

    So, while each query serves as a report by themselves, the first two are used as source queries.

    qid & accept id: (22390896, 22391177) query: mysql query for give array in the date not into interval soup:
    soup wrap:
    If I understand this problem correctly, what you want is that the startdate or end date should not fall in the interval of s_adate and s_ddate.

            Try this:
    
            select * from table where ($datestart  NOT BETWEEN s_adate and s_ddate) OR($enddate NOT BETWEEN s_adate and s_ddate);
    
    qid & accept id: (22393469, 22393561) query: Insert into a colum the month/day/currentyear that is the same month/day as a previous column soup:

    soup wrap:

    You can use the dateadd and getdate functions to generate the dates you want. Try something like this to test it:

    declare @d1 date
    set @d1 = '02/01/2007'
    
    select 
        @d1 as d1, 
        dateadd(YEAR, year(getdate())-year(@d1), @d1) as d2, 
        dateadd(day, 59, dateadd(YEAR, year(getdate())-year(@d1), @d1)) as d3
    

    This would return:

    d1         d2         d3
    ---------- ---------- ----------
    2007-02-01 2014-02-01 2014-04-01
    

    You might have to fine-tune the parameters to dateadd to get exactly what you want.
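    The same year-shift arithmetic can be sanity-checked outside the database. A small Python sketch (function name invented; T-SQL's DATEADD clamps Feb 29 to Feb 28 in non-leap years, which the except branch mirrors):

    ```python
    from datetime import date, timedelta

    def same_day_this_year(d: date, this_year: int) -> date:
        # Mirrors DATEADD(YEAR, YEAR(GETDATE()) - YEAR(d), d): keep the
        # month/day, swap in the target year.
        try:
            return d.replace(year=this_year)
        except ValueError:                 # Feb 29 in a non-leap target year
            return date(this_year, 2, 28)  # DATEADD's clamping behavior

    d1 = date(2007, 2, 1)
    d2 = same_day_this_year(d1, 2014)
    d3 = d2 + timedelta(days=59)           # the DATEADD(DAY, 59, ...) step
    print(d1, d2, d3)  # 2007-02-01 2014-02-01 2014-04-01
    ```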

    To adapt it to an update statement you would do something like:

    update myTable
        set date2 = dateadd(YEAR, year(getdate())-year(date1), date1) , 
        date3 = dateadd(day, 59, dateadd(YEAR, year(getdate())-year(date1), date1)) 
    
    qid & accept id: (22399836, 22399908) query: SQL Query, latest rows for each unique duo soup:

    soup wrap:

    SELECT the MAXimum of modification_date for each GROUP of (A, B), then JOIN back to the original row to get the values (necessary to get the id column):

    SELECT t1.*
    FROM Person t1
    JOIN
    (
        SELECT MAX(modification_date) max_date, A, B
        FROM Person
        GROUP BY A, B
    ) t2 ON t1.A = t2.A AND t1.B = t2.B AND t1.modification_date = t2.max_date
    

    More simply, if you don't care which id you get back, and you only want one row even if modification_date is duplicated, you can just select the MINimum value of id and be done with it:

    SELECT MIN(id) id, A, B, MAX(modification_date) modification_date
    FROM Person
    GROUP BY A, B
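    The join-back pattern is easy to verify on a toy table with Python's sqlite3 (rows invented; note that ties on modification_date within a group would return multiple rows, which is exactly what the MIN(id) variant avoids):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    c = conn.cursor()
    c.executescript("""
    CREATE TABLE Person (id INTEGER PRIMARY KEY, A TEXT, B TEXT,
                         modification_date TEXT);
    INSERT INTO Person VALUES
      (1, 'x', 'y', '2014-01-01'),
      (2, 'x', 'y', '2014-02-01'),   -- latest row for the (x, y) pair
      (3, 'x', 'z', '2014-01-15');   -- only row for (x, z)
    """)

    # Per-group MAX in the derived table, then join back to pick up id.
    latest = c.execute("""
        SELECT t1.id, t1.A, t1.B, t1.modification_date
        FROM Person t1
        JOIN (SELECT MAX(modification_date) max_date, A, B
              FROM Person GROUP BY A, B) t2
          ON t1.A = t2.A AND t1.B = t2.B AND t1.modification_date = t2.max_date
        ORDER BY t1.id
    """).fetchall()

    print(latest)  # [(2, 'x', 'y', '2014-02-01'), (3, 'x', 'z', '2014-01-15')]
    ```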
    
    qid & accept id: (22452123, 22454653) query: How to conditionally adjust date on subsequent rows soup:

    soup wrap:

    The following query should return what you want:

    WITH T1 AS
    ( 
    
        SELECT * 
            , ROW_NUMBER() OVER (PARTITION BY propertyid, isprimary ORDER BY date) AS PropNo
            , COUNT(*) OVER (PARTITION BY propertyid, isprimary) AS PropCount
        FROM 
            -- Replace below with your source data table
             (VALUES(1,'Bathroom condition',1,'2014-04-01')
            ,(1,'External wall finish',0,'2014-04-01') 
            ,(1,'Chimney stacks',0,'2015-04-01') 
            ,(1,'Principal roof covering',0,'2016-04-01') 
            ,(2,'Damp proof course',0,'2016-04-01')) T(propertyid, text, isprimary, date)
    )
    SELECT 
          T1.propertyid
        , T1.text
        , T1.isprimary
        , CASE 
              WHEN T1.isprimary = 1 OR T1.PropNo = T1.PropCount - 1 THEN T1.date
              ELSE ISNULL(T1Next.date, T1.date) END AS [date]
    FROM T1 
    LEFT JOIN T1 AS T1Next ON T1.propertyid = T1Next.propertyid 
        AND T1.isprimary = T1Next.isprimary
        AND T1.PropNo = T1Next.PropNo - 1
    WHERE T1.isprimary = 1
        OR (T1.PropNo < T1.PropCount)
    

    I use the ROW_NUMBER() and COUNT(*) function to determine when there are subsequent rows. To apply the date from the subsequent row, I use a LEFT JOIN.

    EDIT: Changing the left join to this ensures that the join only occurs on secondary elements and only every second element:

    LEFT JOIN T1 AS T1Next ON T1.propertyid = T1Next.propertyid 
        AND T1.isprimary = 0
        AND T1Next.isprimary = 0
        AND T1.PropNo = T1Next.PropNo - 1
        AND T1Next.PropNo % 2 = 0
    

    That means we don't need the case statement, just this:

    ISNULL(T1Next.date, T1.date) AS [date]
    

    But the where statement is not quite right. This works:

    WHERE T1.isprimary = 1
        OR (T1.PropNo % 2 = 0)     --every 2nd one
        OR T1Next.date IS NOT NULL --and the 1st if there is a 2nd
    
    qid & accept id: (22468717, 22468815) query: How to update duplicated rows with a index (Mysql) soup:

    soup wrap:

    Try this:

    update city cross join
           (select @city := '', @prevcity := '', @i := 0) const
        set `index` = (case when (@prevcity := @city) is null then null
                            when (@city := city) is null then null
                            else @i := if(@prevcity = city, @i + 1, 1)
                       end)
        order by city; 
    

    If you are familiar with the use of variables for enumeration in a select statement, then this is similar. The complication is ensuring the order of evaluation for the update. This is handled by using a case statement, which sequentially evaluates each clause until one is true. The first two are guaranteed to be false (because the values should never be NULL).

    EDIT:

    If you have a unique id, then the solution is a bit easier. I wish you could do this:

    update city c
        set `index` = (select count(*) from city c2 where c2.city = c.city and c2.id <= c.id);
    

    But instead, you can do it with more joins:

    update city c join
           (select id, (select count(*) from city c2 where c2.city = c1.city and c2.id <= c1.id) as newind
            from city c1
           ) ci
           on c.id = ci.id
        set c.`index` = ci.newind;
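    The correlated-count idea is easier to see on an engine that allows an UPDATE to re-read its own table. A sketch in Python's sqlite3, where the direct form works (column named idx here since index is reserved; data invented):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    c = conn.cursor()
    c.executescript("""
    CREATE TABLE city (id INTEGER PRIMARY KEY, city TEXT, idx INTEGER);
    INSERT INTO city (id, city) VALUES
      (1, 'Oslo'), (2, 'Bergen'), (3, 'Oslo'), (4, 'Oslo');
    """)

    # Rank each row within its city by id: the count of rows in the same
    # city with an id <= this row's id.  (MySQL needs the extra join from
    # the answer because it rejects an UPDATE that re-reads its own table.)
    c.execute("""
        UPDATE city
        SET idx = (SELECT COUNT(*) FROM city c2
                   WHERE c2.city = city.city AND c2.id <= city.id)
    """)

    ranked = c.execute("SELECT id, city, idx FROM city ORDER BY id").fetchall()
    print(ranked)
    # [(1, 'Oslo', 1), (2, 'Bergen', 1), (3, 'Oslo', 2), (4, 'Oslo', 3)]
    ```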
    
    qid & accept id: (22512709, 22513571) query: In SQL how can I add my Row_Number() to my current subquery in the from clause? soup:

    soup wrap:

    You should be able to do exactly the same thing (although I cannot imagine what you are trying to accomplish):

    Select  *
    From (
            SELECT DISTINCT ROW_NUMBER() Over(Order By c.UserId) rn, c.UserId, (u.FirstName + ' ' + u.LastName) AS [UserName], Count(c.UserId +c.CaseId+c.LineNumber) AS [CompletedCase]
            FROM T.dbo.CompletedCase c join T.dbo.User u on c.UserId = u.UserID
            WHERE c.PrintDateTime >= '2014-01-27 7:00' AND c.PrintDateTime <= '2014-01-27 17:00'
            Group By u.FirstName, u.LastName, c.UserId
        ) x
    Where   x.rn Between 0 and 25
    Order By [UserName]
    

    Personally, I like doing this kind of thing with CTEs:

    ;with cte as
    (
        SELECT  DISTINCT ROW_NUMBER() Over(Order By c.UserId) rn
                ,c.UserId
                ,(u.FirstName + ' ' + u.LastName) AS [UserName]
                ,Count(c.UserId +c.CaseId+c.LineNumber) AS [CompletedCase]
        FROM T.dbo.CompletedCase c
        join T.dbo.User u
            on c.UserId = u.UserID
        WHERE c.PrintDateTime >= '2014-01-27 7:00' AND c.PrintDateTime <= '2014-01-27 17:00'
        Group By u.FirstName, u.LastName, c.UserId
    )
    Select  UserId
            ,UserName
            ,CompletedCase
    From    cte
    Where   rn Between 0 And 25
    Order By [UserName]
    

    But, it kind of seems like you just want the first 25 rows, so why not just:

    SELECT DISTINCT Top 25 c.UserId, (u.FirstName + ' ' + u.LastName) AS [UserName], Count(c.UserId +c.CaseId+c.LineNumber) AS [CompletedCase]
    FROM T.dbo.CompletedCase c join T.dbo.User u on c.UserId = u.UserID
    WHERE c.PrintDateTime >= '2014-01-27 7:00' AND c.PrintDateTime <= '2014-01-27 17:00'
    Group By u.FirstName, u.LastName, c.UserId
    Order By [UserName]
    
    qid & accept id: (22537662, 22537922) query: transfer the value of a field to variable in SQL Server 2012 soup:

    soup wrap:

    I suspect that you really want the lag() function:

    select t.*,
           lag(code) over (order by date) as lastcode
    from table t;
    

    Note that this would be NULL in the first case, because there is no previous row. You can use coalesce() (or isnull() in SQL Server) to assign a default value.
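    SQLite 3.25+ ships lag() as well, so the select form can be tried outside SQL Server; a sketch with Python's sqlite3 (table and rows invented):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")  # needs SQLite >= 3.25 for window functions
    c = conn.cursor()
    c.executescript("""
    CREATE TABLE readings (date TEXT, code TEXT);
    INSERT INTO readings VALUES ('2014-01-01', 'A'), ('2014-01-02', 'B'),
                                ('2014-01-03', 'C');
    """)

    # LAG(code) pulls the previous row's code in date order; the first row
    # has no predecessor, so its lastcode comes back NULL (None in Python).
    rows = c.execute("""
        SELECT date, code,
               LAG(code) OVER (ORDER BY date) AS lastcode
        FROM readings
        ORDER BY date
    """).fetchall()

    print(rows)
    # [('2014-01-01', 'A', None), ('2014-01-02', 'B', 'A'), ('2014-01-03', 'C', 'B')]
    ```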

    In SQL Server, you can use this in an update statement:

    with toupdate as (
          select t.*,
                 lag(code) over (order by date) as new_lastcode
          from table t
         )
    update toupdate
        set lastcode = new_lastcode;
    

    This assumes the column already exists in the table.

    qid & accept id: (22544486, 22544574) query: How to insert multiple rows with one insert statement soup:

    soup wrap:

    Try this:

    INSERT INTO tblUsers (State,City,Code)
    SELECT 'IN','Indy', UserCode
    FROM tblAccounts
    WHERE UserCode IN
        (SELECT UserCode
         FROM tblAccounts
         WHERE State = 'IN')
    

    or better simplified (a subquery is not needed):

    INSERT INTO tblUsers (State,City,Code)
    SELECT 'IN','Indy', UserCode
    FROM tblAccounts
    WHERE State = 'IN'
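    The simplified INSERT ... SELECT behaves the same way in SQLite; a quick check with Python's sqlite3 (sample rows invented):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    c = conn.cursor()
    c.executescript("""
    CREATE TABLE tblAccounts (UserCode TEXT, State TEXT);
    CREATE TABLE tblUsers (State TEXT, City TEXT, Code TEXT);
    INSERT INTO tblAccounts VALUES ('u1', 'IN'), ('u2', 'OH'), ('u3', 'IN');
    """)

    # One INSERT ... SELECT inserts a row per matching account: the literals
    # 'IN' and 'Indy' repeat on every row, UserCode varies per source row.
    c.execute("""
        INSERT INTO tblUsers (State, City, Code)
        SELECT 'IN', 'Indy', UserCode
        FROM tblAccounts
        WHERE State = 'IN'
    """)

    users = c.execute("SELECT * FROM tblUsers ORDER BY Code").fetchall()
    print(users)  # [('IN', 'Indy', 'u1'), ('IN', 'Indy', 'u3')]
    ```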
    
    qid & accept id: (22583760, 22584415) query: Select Every Date for Date Range and Insert soup:

    soup wrap:

    I think this should do it (DEMO):

    ;with cte as (
      select
         id
        ,startdate
        ,enddate
        ,value / (1+datediff(day, startdate, enddate)) as value
        ,startdate as date
      from units
      union all
      select id, startdate, enddate, value, date+1 as date
      from cte
      where date < enddate
    )
    select
       row_number() over (order by date) as ID
      ,date
      ,sum(value) as value
    from cte
    group by date
    

    The idea is to use a Recursive CTE to explode the date ranges into one record per day. Also, the logic of value / (1+datediff(day, startdate, enddate)) distributes the total value evenly over the number of days in each range. Finally, we group by day and sum together all the values corresponding to that day to get the output:

    | ID |                            DATE | VALUE |
    |----|---------------------------------|-------|
    |  1 |  January, 01 2014 00:00:00+0000 |    11 |
    |  2 |  January, 02 2014 00:00:00+0000 |    16 |
    |  3 |  January, 03 2014 00:00:00+0000 |    16 |
    |  4 | February, 01 2014 00:00:00+0000 |    10 |
    |  5 | February, 02 2014 00:00:00+0000 |    10 |
    

    From here you can join with your result table (Table B) by date, and update/insert the value as needed. That logic might look something like this (test it first of course before running in production!):

    update B set B.VALUE = R.VALUE from TableB B join Result R on B.DATE = R.DATE
    insert TableB (DATE, VALUE)
      select DATE, VALUE from Result R where R.DATE not in (select DATE from TableB)
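    The same explode-then-aggregate shape works in SQLite with WITH RECURSIVE; a runnable sketch using Python's sqlite3 (dates stored as text, julianday() standing in for DATEDIFF, sample rows invented):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    c = conn.cursor()
    c.executescript("""
    CREATE TABLE units (id INTEGER, startdate TEXT, enddate TEXT, value REAL);
    INSERT INTO units VALUES
      (1, '2014-01-01', '2014-01-03', 33),   -- 3 days -> 11 per day
      (2, '2014-01-02', '2014-01-03', 10);   -- 2 days -> 5 per day
    """)

    # The recursive member walks date forward one day at a time until it
    # reaches enddate; the outer query then sums the per-day shares.
    rows = c.execute("""
        WITH RECURSIVE cte(id, enddate, value, date) AS (
          SELECT id, enddate,
                 value / (1 + julianday(enddate) - julianday(startdate)),
                 startdate
          FROM units
          UNION ALL
          SELECT id, enddate, value, date(date, '+1 day')
          FROM cte
          WHERE date < enddate
        )
        SELECT date, SUM(value) FROM cte GROUP BY date ORDER BY date
    """).fetchall()

    print(rows)  # [('2014-01-01', 11.0), ('2014-01-02', 16.0), ('2014-01-03', 16.0)]
    ```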
    
    qid & accept id: (22622841, 22623091) query: SQL Server : Calculate a percentual value in a row out of a sum of multiple rows soup:

    soup wrap:

    Test Data

    DECLARE @TABLE TABLE (id INT,name VARCHAR(100),value INT)
    INSERT INTO @TABLE VALUES    
    (1,'kermit',100),(2,'piggy',200),(3,'tiffy',300)
    

    Query

    ;WITH CTE1
    AS 
     (
      SELECT SUM(value) AS Total
      FROM @TABLE
      ),
    CTE2
    AS
      (
      SELECT *
        , CAST(CAST((CAST(Value AS NUMERIC(10,2)) /
           (SELECT CAST(Total AS NUMERIC(10,2)) FROM CTE1)) * 100.00
            AS NUMERIC(4,2)) AS NVARCHAR(10)) + '%' AS [% of sum of matching rows]
      FROM @TABLE
      )
    SELECT * 
    FROM CTE2
    

    Result Set

    ╔════╦════════╦═══════╦═══════════════════════════╗
    ║ id ║  name  ║ value ║ % of sum of matching rows ║
    ╠════╬════════╬═══════╬═══════════════════════════╣
    ║  1 ║ kermit ║   100 ║ 16.67%                    ║
    ║  2 ║ piggy  ║   200 ║ 33.33%                    ║
    ║  3 ║ tiffy  ║   300 ║ 50.00%                    ║
    ╚════╩════════╩═══════╩═══════════════════════════╝
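    The percentage-of-total can also be computed with a single window aggregate instead of the totals CTE; a sketch in Python's sqlite3 (requires SQLite 3.25+ for SUM() OVER (); data copied from the test rows above):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    c = conn.cursor()
    c.executescript("""
    CREATE TABLE t (id INTEGER, name TEXT, value INTEGER);
    INSERT INTO t VALUES (1, 'kermit', 100), (2, 'piggy', 200), (3, 'tiffy', 300);
    """)

    # SUM(value) OVER () is the grand total repeated on every row, so each
    # row's share is one expression -- no separate totals query needed.
    rows = c.execute("""
        SELECT id, name, value,
               ROUND(100.0 * value / SUM(value) OVER (), 2) AS pct
        FROM t ORDER BY id
    """).fetchall()

    print(rows)
    # [(1, 'kermit', 100, 16.67), (2, 'piggy', 200, 33.33), (3, 'tiffy', 300, 50.0)]
    ```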
    
    qid & accept id: (22629022, 22629198) query: Inserting a row at the specific place in SQLite database soup:

    soup wrap:

    You shouldn't care about key values; just append your row at the end.

    If you really need to do so, you could probably just update the keys with something like this. Say you want to insert the new row at key 87:

    Make room for the key

    update mytable
    set key = key + 1
    where key >= 87
    

    Insert your row

    insert into mytable ...
    

    And finally update the key for the new row

    update mytable
    set key = 87
    where key = NEW_ROW_KEY
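    The shift-then-insert sequence is easy to exercise with Python's sqlite3; a sketch (table and rows invented; with no UNIQUE constraint on key the shift cannot hit a transient collision, which some engines would reject):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    c = conn.cursor()
    c.executescript("""
    CREATE TABLE mytable ("key" INTEGER, payload TEXT);
    INSERT INTO mytable VALUES (86, 'a'), (87, 'b'), (88, 'c');
    """)

    # Step 1: make room -- shift every key at or above the target up by one.
    c.execute('UPDATE mytable SET "key" = "key" + 1 WHERE "key" >= 87')

    # Step 2: insert the new row; here the key can be set directly instead
    # of the insert-then-update dance needed for auto-assigned keys.
    c.execute("INSERT INTO mytable VALUES (87, 'new')")

    rows = c.execute('SELECT "key", payload FROM mytable ORDER BY "key"').fetchall()
    print(rows)  # [(86, 'a'), (87, 'new'), (88, 'b'), (89, 'c')]
    ```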
    
    qid & accept id: (22655631, 22655782) query: Normalize comma separated foreign key soup:

    soup wrap:

    The general idea is to split the comma separated field into a set using regexp_split_to_table, cast each value to integer, and pair the results up with the element_id from the tuple we got the original comma separated field from.

    For PostgreSQL 9.3, you'd write:

    INSERT INTO element_authors(element_id, author_id)
    SELECT
      element_id,
      CAST (author_id AS integer) AS author_id
    FROM
      element,
      LATERAL regexp_split_to_table(nullif(authors, ''), ',') author_id;
    

    or on older PostgreSQL versions I think in this case it's safe to write:

    INSERT INTO element_authors(element_id, author_id)
    SELECT
      element_id,
      CAST( regexp_split_to_table(nullif(authors, ''), ',') AS integer) AS author_id
    FROM
      element;
    
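The same split-cast-pair idea can be sketched in plain Python, with hypothetical (element_id, authors) tuples standing in for the element table; the empty-string check plays the role of the NULLIF(authors, '') guard:

```python
# sample rows standing in for the element table (illustrative data)
element = [(1, "10,11"), (2, ""), (3, "12")]

element_authors = [
    (element_id, int(author_id))
    for element_id, authors in element
    if authors                        # skip empty author lists, like NULLIF
    for author_id in authors.split(",")
]

print(element_authors)  # [(1, 10), (1, 11), (3, 12)]
```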
    qid & accept id: (22668248, 22706174) query: Using array of Records in 'IN' operator in Oracle soup:

soup wrap:

    Leveraging Oracle Collections to Build Array-typed Solutions

    The answer to your question is YES, dimensioned variables such as ARRAYS and COLLECTIONS are viable data types in solving problems where there are multiple values in either or both the input and output values.

    Additional good news is that the discussion for a simple example (such as the one in the OP) is pretty much the same as for a complex one. Solutions built with arrays are nicely scalable and dynamic if designed with a little advanced planning.

    Some Up Front Design Decisions

    • There are actual collection types called ARRAYS and ASSOCIATIVE ARRAYS. I chose to use NESTED TABLE TYPES because of their accessibility to direct SQL queries. In some ways, they exhibit "array-like" behavior. There are other trade-offs which can be researched through Oracle references.

    • The query applied to search the COURSE TABLE would apply a JOIN condition instead of an IN-LIST approach.

    • Wrapping the search in a STORED PROCEDURE improves database response: queries within the procedure call can leverage and reuse already compiled code plus their cached execution plans.

    Choosing the Right Collection or Array Type

    There are a lot of choices of collection types in Oracle for storing variables into memory. Each has an advantage and some sort of limitation. AskTom from Oracle has a good example and break-down of what a developer can expect by choosing one variable collection type over another.

    Using NESTED TABLE Types for Managing Multiple Valued Variables

    For this solution, I chose to work with NESTED TABLES because of their ability to be accessed directly through SQL commands. After trying several different approaches, I noticed that the plain-SQL accessibility leads to more clarity in the resulting code.

    The down-side is that you will notice that there is a little overhead here and there with respect to declaring an instance of a nested table type, initializing each instance, and managing its size with the addition of new values.

    In any case, if you anticipate an unknown number of input values (or output values), an array-typed data type (collection) of any sort is a more flexible structure for your code. It is likely to require less maintenance in the end.

    The Example: A Stored Procedure Search Query

    Custom TYPE Definitions

     CREATE OR REPLACE TYPE  "COURSE_REC_TYPE" IS OBJECT (DEPID NUMBER(10,0), COURSE VARCHAR2(10));
    
     CREATE OR REPLACE TYPE  "COURSE_TBL_TYPE" IS TABLE of course_rec_type;
    

    PROCEDURE Source Code

     create or replace PROCEDURE ZZ_PROC_COURSE_SEARCH IS
    
        my_input   course_tbl_type:= course_tbl_type();
        my_output  course_tbl_type:= course_tbl_type();
        cur_loop_counter   pls_integer;
    
        c_output_template   constant  varchar2(100):=
            'DEPID: <>,  COURSE: <>';
        v_output   VARCHAR2(200);
    
        CURSOR find_course_cur IS           
           SELECT crs.depid, crs.course
             FROM zz_course crs,
                 (SELECT depid, course
                    FROM TABLE (CAST (my_input AS course_tbl_type))
                    ) search_values
            WHERE crs.depid = search_values.depid
              AND crs.course = search_values.course;
    
     BEGIN
        my_input.extend(2);
        my_input(1):= course_rec_type(1, 'A');
        my_input(2):= course_rec_type(4, 'D');
    
        cur_loop_counter:= 0;
        for i in find_course_cur
        loop
           cur_loop_counter:= cur_loop_counter + 1;
           my_output.extend;
           my_output(cur_loop_counter):= course_rec_type(i.depid, i.course);
    
        end loop;
    
     for j in my_output.first .. my_output.last
     loop
     -- REPLACE substitutes every occurrence of '<>' at once, so build the
     -- output line directly rather than applying the template twice
     v_output:= 'DEPID: ' || to_char(my_output(j).depid)
             || ',  COURSE: ' || my_output(j).course;
    
         dbms_output.put_line(v_output);
    
     end loop;
    
     end ZZ_PROC_COURSE_SEARCH;
    

    Procedure OUTPUT:

     DEPID: 1,  COURSE: A
     DEPID: 4,  COURSE: D
    
     Statement processed.
    
    
     0.03 seconds
    

    MY COMMENTS: I wasn't particularly satisfied with the way the input variables were stored. There was a clumsy kind of problem with "loading" values into the nested table structure... If you can consider using a single search key instead of a composite pair (i.e., depid and course), the problem condenses to a simpler form.

    Revised Cursor Using a Single Search Value

    This is the proposed modification to the table design of the OP. Add a single unique key id column (RecId) to represent each unique combination of DepId and Course.

    Proposed Table Structure Change: ZZ_COURSE_NEW

    Note that the RecId column represents a SURROGATE KEY which should have no internal meaning aside from its property as a uniquely assigned value.

    Custom TYPE Definitions

     CREATE OR REPLACE TYPE  "NUM_TBL_TYPE" IS TABLE of INTEGER;
    

    Remove Array Variable

    This will be passed directly through an input parameter from the procedure call.

     -- REMOVE
     my_input   course_tbl_type:= course_tbl_type();
    

    Loading and Presenting INPUT Parameter Array (Nested Table)

    The following can be removed from the main procedure and presented as part of the call to the procedure.

     BEGIN
        my_input.extend(2);
        my_input(1):= course_rec_type(1, 'A');
        my_input(2):= course_rec_type(4, 'D');
    

    Becomes:

     create or replace PROCEDURE ZZ_PROC_COURSE_SEARCH (p_search_ids IN num_tbl_type) IS...
    

    and

     my_external_input.extend(2);
     my_external_input:= num_tbl_type(1, 4);
    

    Changing the Internal Cursor Definition

    The cursor looks about the same. You can just as easily use an IN-LIST now that there is only one search parameter.

     CURSOR find_course_cur IS           
        SELECT crs.depid, crs.course
          FROM zz_course_new crs, 
               (SELECT column_value as recid
                  FROM TABLE (CAST (p_search_ids AS num_tbl_type))
               ) search_values
         WHERE crs.recid = search_values.recid;
    

    The Actual SEARCH Call and Output

    The searching portion of this operation is now isolated and dynamic; it does not need to be changed. All the changes happen in the calling PL/SQL block, where the search ID values are a lot easier to read and change.

     DECLARE
        my_input_external   num_tbl_type:= num_tbl_type();
    
     BEGIN
        my_input_external.extend(3);
        my_input_external:= num_tbl_type(1,3,22);
    
        ZZ_PROC_COURSE_SEARCH (p_search_ids => my_input_external);
    
     END; 
    
    
     -- The OUTPUT (currently sent to DBMS_OUTPUT)
    
    
     DEPID: 1,  COURSE: A
     DEPID: 4,  COURSE: D
     DEPID: 7,  COURSE: G
    
     Statement processed.
    
     0.01 seconds
    
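The core idea here, joining the base table against a table-valued collection of search keys instead of building an IN-list, can be sketched with sqlite3, where a temporary table stands in for the NUM_TBL_TYPE nested table parameter (table and column names follow the example above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE zz_course_new (recid INTEGER, depid INTEGER, course TEXT)")
con.executemany("INSERT INTO zz_course_new VALUES (?, ?, ?)",
                [(1, 1, "A"), (2, 2, "B"), (3, 3, "C"), (4, 4, "D")])

# the temp table plays the role of the nested table passed as p_search_ids
con.execute("CREATE TEMP TABLE search_values (recid INTEGER)")
con.executemany("INSERT INTO search_values VALUES (?)", [(1,), (4,)])

rows = con.execute("""
    SELECT crs.depid, crs.course
      FROM zz_course_new crs
      JOIN search_values ON crs.recid = search_values.recid
     ORDER BY crs.depid
""").fetchall()
print(rows)  # [(1, 'A'), (4, 'D')]
```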
    qid & accept id: (22697790, 22697920) query: Get the difference returned from two queries as the return of one query soup:

soup wrap:

    You can just subtract the two values:

    SELECT (SELECT COUNT(ID)
            FROM Used
            WHERE ID = 54
              AND QTY = 1.875
              AND DateReceived = '2014-03-27 00:00:00'
              AND VendorID = 12400
              AND WithDrawn = 0) -
           (SELECT COUNT(ID)
            FROM Used
            WHERE ID = 54
              AND QTY = 1.875
              AND DateReceived = '2014-03-27 00:00:00'
              AND VendorID = 12400
              AND WithDrawn = 1);
    

    Alternatively, construct a value of +1 or -1 for each record, and take the sum over that:

    SELECT SUM(CASE WithDrawn WHEN 0 THEN 1 ELSE -1 END)
    FROM Used
    WHERE ID = 54
      AND QTY = 1.875
      AND DateReceived = '2014-03-27 00:00:00'
      AND VendorID = 12400;
    
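The conditional-sum trick can be verified with sqlite3; this sketch keeps only the ID and WithDrawn columns for brevity (sample data is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Used (ID INTEGER, WithDrawn INTEGER)")
# three rows still held, one withdrawn
con.executemany("INSERT INTO Used VALUES (?, ?)",
                [(54, 0), (54, 0), (54, 0), (54, 1)])

# each row contributes +1 (WithDrawn = 0) or -1 (WithDrawn = 1)
(diff,) = con.execute(
    "SELECT SUM(CASE WithDrawn WHEN 0 THEN 1 ELSE -1 END) FROM Used WHERE ID = 54"
).fetchone()
print(diff)  # 2
```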
    qid & accept id: (22724852, 22725833) query: Oracle combining two monthly sums from to different tables soup:

soup wrap:

    You can union the results together, and then sum those results. Keep in mind that you are crossing years based on the OP. If this is not the intent, then I also provided an alternative grouped by year and month.

    Grouped by month:

    SELECT c1.monthNum
        , sum(c1.cost) as cost
    FROM 
    (
        SELECT to_char(t1.date1, 'MM') as monthNum, SUM(t1.cost1)  as cost
        FROM table1 t1
        WHERE ..your table1 where clause here...
        GROUP BY to_char(t1.date1, 'MM') 
    
        UNION ALL
    
        SELECT to_char(t2.date1, 'MM') as monthNum, SUM(t2.cost1)  as cost
        FROM table2 t2
        WHERE ..your table2 where clause here...    
        GROUP BY to_char(t2.date1, 'MM')
    ) c1
    GROUP BY c1.monthNum
    

    OR Grouped by year:

    SELECT c1.yearNum
        , c1.monthNum
        , sum(c1.cost) as cost
    FROM 
    (
        SELECT to_char(t1.date1, 'YYYY') AS yearNum, to_char(t1.date1, 'MM') as monthNum, SUM(t1.cost1)  as cost
        FROM table1 t1
        WHERE ..your table1 where clause here...
        GROUP BY to_char(t1.date1, 'YYYY'), to_char(t1.date1, 'MM') 
    
        UNION ALL
    
        SELECT to_char(t2.date1, 'YYYY') AS yearNum, to_char(t2.date1, 'MM') as monthNum, SUM(t2.cost1)  as cost
        FROM table2 t2
        WHERE ..your table2 where clause here...    
        GROUP BY to_char(t2.date1, 'YYYY'), to_char(t2.date1, 'MM')
    ) c1
    GROUP BY c1.yearNum, c1.monthNum
    
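The UNION ALL plus re-aggregation pattern can be exercised with sqlite3, where strftime('%m', ...) plays the role of Oracle's to_char(date1, 'MM') (table layout and sample rows are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (date1 TEXT, cost1 REAL)")
con.execute("CREATE TABLE table2 (date1 TEXT, cost1 REAL)")
con.executemany("INSERT INTO table1 VALUES (?, ?)",
                [("2014-01-15", 10), ("2014-02-10", 20)])
con.executemany("INSERT INTO table2 VALUES (?, ?)",
                [("2014-01-20", 5), ("2014-03-05", 7)])

# sum each table per month, union the partial sums, then sum again
rows = con.execute("""
    SELECT monthNum, SUM(cost) AS cost FROM (
        SELECT strftime('%m', date1) AS monthNum, SUM(cost1) AS cost
          FROM table1 GROUP BY monthNum
        UNION ALL
        SELECT strftime('%m', date1) AS monthNum, SUM(cost1) AS cost
          FROM table2 GROUP BY monthNum
    ) GROUP BY monthNum ORDER BY monthNum
""").fetchall()
print(rows)  # [('01', 15.0), ('02', 20.0), ('03', 7.0)]
```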
    qid & accept id: (22738933, 22743474) query: What are the ways to store and search complex numeric data? soup:

soup wrap:

    I recommend using Apache Solr to index and search your data.

    How you use Solr depends on your requirements. I use it as a searchable cache of my data. It is extremely useful when the raw master data must be kept as files. Lots of frameworks integrate Solr as their search backend.

    For building front-ends to a Solr index, check out solr-ajax.

    Example

    Install Solr

    Download Solr distribution:

    wget http://www.apache.org/dist/lucene/solr/4.7.0/solr-4.7.0.tgz
    tar zxvf solr-4.7.0.tgz
    

    Start Solr using embedded Jetty container:

    cd solr-4.7.0/example
    java -jar start.jar
    

    Solr should now be running locally

    http://localhost:8983/solr
    

    data.xml

    You did not specify a data format so I used the native XML supported by Solr:

    <!-- field names other than id, toy_type_s and estimated_spots_i are
         illustrative reconstructions; only the suffixes matter here -->
    <add>
      <doc>
        <field name="id">1</field>
        <field name="toy_type_s">Dog</field>
        <field name="coat_s">Spotted</field>
        <field name="owner_s">John</field>
        <field name="color_s">White</field>
        <field name="estimated_spots_i">10</field>
        <field name="age_i">11</field>
      </doc>
      <doc>
        <field name="id">2</field>
        <field name="toy_type_s">Cat</field>
        <field name="coat_s">Striped</field>
        <field name="owner_s">Jane</field>
        <field name="color_s">White</field>
        <field name="estimated_spots_i">5</field>
      </doc>
    </add>
    

    Notes:

    • Every document in Solr must have a unique id
    • The field names carry trailing "_s" (string) and "_i" (integer) suffixes to indicate field types. This is a cheat to take advantage of Solr's dynamic field feature.

    Index XML file

    Lots of ways to get data into Solr. The simplest way is the curl command:

    curl http://localhost:8983/solr/update?commit=true -H "Content-Type: text/xml" --data-binary @data.xml
    

    It's worth noting that Solr supports other data formats, such as JSON and CSV.

    Search indexed file

    Again, there are language libraries to support Solr searches; the following examples use curl. The Solr search syntax is along the lines you've required.

    Here's a simple example:

    $ curl http://localhost:8983/solr/select/?q=toy_type_s:Cat
    <response>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">1</int>
        <lst name="params">
          <str name="q">toy_type_s:Cat</str>
        </lst>
      </lst>
      <result name="response" numFound="1" start="0">
        <doc>
          <str name="id">2</str>
          <str name="toy_type_s">Cat</str>
          <str name="coat_s">Striped</str>
          <str name="owner_s">Jane</str>
          <str name="color_s">White</str>
          <int name="estimated_spots_i">5</int>
          <long name="_version_">1463999035283079168</long>
        </doc>
      </result>
    </response>
    
    

    A more complex search example:

    $ curl "http://localhost:8983/solr/select/?q=toy_type_s:Cat%20AND%20estimated_spots_i:\[2%20TO%206\]" 
    <response>
      <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">2</int>
        <lst name="params">
          <str name="q">toy_type_s:Cat AND estimated_spots_i:[2 TO 6]</str>
        </lst>
      </lst>
      <result name="response" numFound="1" start="0">
        <doc>
          <str name="id">2</str>
          <str name="toy_type_s">Cat</str>
          <str name="coat_s">Striped</str>
          <str name="owner_s">Jane</str>
          <str name="color_s">White</str>
          <int name="estimated_spots_i">5</int>
          <long name="_version_">1463999035283079168</long>
        </doc>
      </result>
    </response>
    
    
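Rather than hand-escaping spaces and brackets as in the curl examples above, a client can let a URL library do the percent-encoding. A small sketch using Python's standard library, with the same host, port, and query as the examples:

```python
from urllib.parse import urlencode

base = "http://localhost:8983/solr/select/"
# urlencode percent-escapes the ':', '[', ']' and spaces for us
params = {"q": "toy_type_s:Cat AND estimated_spots_i:[2 TO 6]"}
url = base + "?" + urlencode(params)
print(url)
```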
    qid & accept id: (22742235, 22744674) query: comparing the two values line by line from two different text files soup:

soup wrap:

    Maybe try:

    paste a.txt b.txt | sed -n '/\([0-9]\+\)[[:space:]]\+\1/p' > c.txt
    

    c.txt will contain:

    10 10
    

    And

    paste a.txt b.txt | sed '/\([0-9]\+\)[[:space:]]\+\1/d' > d.txt
    

    d.txt will contain:

    20 30
    30 20
    
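What the paste + sed pipeline does, pairing the files line by line and splitting matches from mismatches, can be mirrored in a few lines of Python (file contents inlined for illustration):

```python
# a.txt and b.txt contents, one value per line
a = ["10", "20", "30"]
b = ["10", "30", "20"]

pairs = list(zip(a, b))                               # what paste does
matches    = [f"{x} {y}" for x, y in pairs if x == y]  # sed -n '...p'
mismatches = [f"{x} {y}" for x, y in pairs if x != y]  # sed '...d'

print(matches)     # ['10 10']
print(mismatches)  # ['20 30', '30 20']
```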
    qid & accept id: (22783242, 22811989) query: How to read XML column in SQL Server 2008? soup:
soup wrap:
    with xmlnamespaces('http://schemas.microsoft.com/office/infopath/2003/myXSD/2014-03-29T09:41:23' as my)
    select M.XMLData.value('(/my:myFields/my:field1/text())[1]', 'int') as field1,
           M.XMLData.value('(/my:myFields/my:field2/text())[1]', 'int') as field2,
           M.XMLData.value('(/my:myFields/my:field3/text())[1]', 'bit') as field3,
           M.XMLData.value('(/my:myFields/my:FormName/text())[1]', 'datetime') as FormName,
           (
             select ','+R.X.value('text()[1]', 'nvarchar(max)')
             from M.XMLData.nodes('/my:myFields/my:Repeating') as R(X)
             for xml path(''), type
           ).value('substring(text()[1], 2)', 'nvarchar(max)') as Repeating
    from XMLMain as M
    

    Result:

    field1      field2      field3 FormName                Repeating
    ----------- ----------- ------ ----------------------- -----------------------
    1           2           1      2014-04-01 15:11:47.000 hi,hello,how are  you?
    
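The same namespace-qualified extraction, including the comma-joining of the repeating nodes that the FOR XML PATH trick performs, can be sketched with Python's ElementTree; the XML sample here is made up but uses the namespace URI from the query above:

```python
import xml.etree.ElementTree as ET

NS = "http://schemas.microsoft.com/office/infopath/2003/myXSD/2014-03-29T09:41:23"
xml = f"""
<myFields xmlns="{NS}">
  <field1>1</field1>
  <field2>2</field2>
  <Repeating>hi</Repeating>
  <Repeating>hello</Repeating>
</myFields>
"""

root = ET.fromstring(xml)
# element names must be qualified with the default namespace
field1 = int(root.findtext(f"{{{NS}}}field1"))
repeating = ",".join(e.text for e in root.findall(f"{{{NS}}}Repeating"))
print(field1, repeating)  # 1 hi,hello
```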
    qid & accept id: (22861123, 22862746) query: Preventing removal of rows in a SQL query based on ordinal position soup:

soup wrap:

    Adjusted script to allow for gaps in Sequence

    DECLARE @t TABLE(Text char(5), Sequence int)
    INSERT @t VALUES
    ('ITEM1',1),('ITEM1',2),('ITEM1',3),('ITEM2',4),('ITEM2',5),
    ('ITEM3',6),('ITEM2',7),('ITEM2',8),('ITEM1',9),('ITEM1',10)
    
    ;WITH x as
    (
      SELECT Text,Sequence,
        row_number() OVER (order by Sequence)
        - row_number() OVER (partition by text order by Sequence) grp
      FROM @t
    )
    SELECT text, MIN(Sequence) seq
    FROM x
    GROUP BY text, grp
    ORDER BY seq
    

    Result:

    text  seq
    ITEM1 1
    ITEM2 4
    ITEM3 6
    ITEM2 7
    ITEM1 9
    
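The row_number-difference trick identifies consecutive runs (a gaps-and-islands pattern); in procedural code the same collapse is what itertools.groupby does over rows already sorted by Sequence. A sketch using the sample data above:

```python
from itertools import groupby

rows = [("ITEM1", 1), ("ITEM1", 2), ("ITEM1", 3), ("ITEM2", 4), ("ITEM2", 5),
        ("ITEM3", 6), ("ITEM2", 7), ("ITEM2", 8), ("ITEM1", 9), ("ITEM1", 10)]

# collapse each consecutive run of the same text to (text, first sequence)
runs = [(text, next(grp)[1]) for text, grp in groupby(rows, key=lambda r: r[0])]
print(runs)  # [('ITEM1', 1), ('ITEM2', 4), ('ITEM3', 6), ('ITEM2', 7), ('ITEM1', 9)]
```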
    qid & accept id: (22872278, 22872495) query: Remove duplicate address values where length of second column is less than the length of the greatest matching address soup:

soup wrap:

    You could rebuild your data into a new table using

    select address_1, max(address_2) as address_2, addressinfo
    from table1
    group by address_1, addressinfo
    

    http://sqlfiddle.com/#!6/3d22c/2

    Edit 1: To select city and state as well, you need to include them as group by expressions:

    select address_1, max(address_2) as address_2, addressinfo, city, state
    from table1
    group by address_1, addressinfo, city, state
    

    http://sqlfiddle.com/#!6/4527c/1

    Edit 2: The max function delivers the longest value here, as needed; string MAX compares lexicographically, so this works when the shorter values are true prefixes of the longer ones.

    Here is an example of this: http://sqlfiddle.com/#!6/3fba8/1
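Why MAX picks the longest variant can be checked directly: lexicographic comparison ranks a longer string above any of its prefixes. A small Python sketch with hypothetical address data:

```python
# several address_2 variants recorded for the same address_1 (made-up data)
groups = {"12 Oak Ave": ["Suite 1", "Suite 10", "Suite 100"]}

# max() on strings is lexicographic; when the shorter values are true
# prefixes of the longest, the longest also sorts highest
picked = {addr1: max(variants) for addr1, variants in groups.items()}
print(picked)  # {'12 Oak Ave': 'Suite 100'}
```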

    qid & accept id: (22876321, 22876446) query: Join 3 tables and select only the top average for each category soup:

soup wrap:

    Try this:

    SELECT T1.Name, T1.Category, T1.Average
    FROM
    (SELECT  B1.Name, B2.Category, AVG(R1.Stars) as Average
    FROM Business B1
    INNER JOIN Reviews R1
    ON B1.ID=R1.BusinessID 
    INNER JOIN BusinessCategories B2
    ON B2.BusinessID=R1.BusinessID
    WHERE R1.Date >= convert(datetime,'01-6-2011') AND R1.Date <= convert(datetime,'30-6-2011')
    GROUP BY Name, Category) T1
    
    LEFT JOIN (
    SELECT  B1.Name, B2.Category, AVG(R1.Stars) as Average
    FROM Business B1
    INNER JOIN Reviews R1
    ON B1.ID=R1.BusinessID 
    INNER JOIN BusinessCategories B2
    ON B2.BusinessID=R1.BusinessID
    WHERE R1.Date >= convert(datetime,'01-6-2011') AND R1.Date <= convert(datetime,'30-6-2011')
    GROUP BY Name, Category
    ) T2 ON T2.Average > T1.Average AND T1.Category = T2.Category
    WHERE T2.Name IS NULL
    

    OR

    SELECT Name,Category,Average FROM
    (
    SELECT ROW_NUMBER() OVER(Partition By Category ORDER BY AVG(R1.Stars) DESC) as RN, B1.Name, B2.Category, AVG(R1.Stars) as Average
    FROM Business B1
    INNER JOIN Reviews R1
    ON B1.ID=R1.BusinessID 
    INNER JOIN BusinessCategories B2
    ON B2.BusinessID=R1.BusinessID
    WHERE R1.Date >= convert(datetime,'01-6-2011') AND R1.Date <= convert(datetime,'30-6-2011')
    GROUP BY Name, Category
    ) T
    WHERE RN=1
    
    qid & accept id: (22909997, 22910064) query: Get an array of all columns starting with the same characters. soup:

    soup wrap:

    MySQL LIKE to the rescue:

    SELECT col1 FROM table1 WHERE col1 LIKE 'FEL%';
    

    The drawback is that you have to add each additional prefix as a separate LIKE condition joined with OR.

    Alternative - REGEXP:

    SELECT col1 FROM table1 WHERE col1 REGEXP '(FEL|PRO|VAI).*'
    

    Then it's just a matter of writing proper regex.

    I would add an extra column to group your items, to avoid this kind of matching altogether (it can get quite expensive on a bigger dataset).
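    A minimal sketch of that idea (the item_group column name and index are assumptions, not part of the original schema):

    ALTER TABLE table1 ADD COLUMN item_group VARCHAR(10);
    UPDATE table1 SET item_group = 'FEL' WHERE col1 LIKE 'FEL%';
    -- ...repeat for the other prefixes, then index and query the group:
    CREATE INDEX idx_item_group ON table1 (item_group);
    SELECT col1 FROM table1 WHERE item_group IN ('FEL', 'PRO', 'VAI');

    This turns the pattern match into a plain indexed equality lookup.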

    https://dev.mysql.com/doc/refman/5.1/en/regexp.html#operator_regexp

    qid & accept id: (22910039, 22911326) query: SQL select id from a table to query again all at once soup:

    soup wrap:

    It should be the last message, so use either MAX(id) or the latest datetime. In this case counter_party_id is just a user id, and the most recent counter_party_id is not necessarily the maximum counter_party_id (I found the solution in the answers and gave props):

    SELECT * 
    FROM yourTable 
    WHERE counter_party_id = ( SELECT MAX(id) FROM yourTable )
    

    or

    SELECT * 
    FROM yourTable 
    WHERE counter_party_id = ( SELECT counter_party_id FROM yourTable ORDER BY time_send DESC LIMIT 1)
    

    The reason is that I simplified the example here, but I had to implement this in a much more complicated schema.

    qid & accept id: (22914453, 22914977) query: Change column data type in MySQL without losing other metadata (DEFAULT, NOTNULL...) soup:

    soup wrap:

    As stated in the manual, ALTER TABLE requires all attributes of the new column definition to be specified.
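    For contrast, writing the statement by hand means restating every attribute yourself; a sketch for the example table used below:

    -- restate NOT NULL (and any DEFAULT/EXTRA) or they are silently dropped:
    ALTER TABLE t MODIFY `value` VARCHAR(10) NOT NULL;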

    However, there is a way to overcome this. You can use INFORMATION_SCHEMA metadata to reconstruct the desired ALTER query. For example, if we have a simple table:

    mysql> DESCRIBE t;
    +-------+------------------+------+-----+---------+----------------+
    | Field | Type             | Null | Key | Default | Extra          |
    +-------+------------------+------+-----+---------+----------------+
    | id    | int(11) unsigned | NO   | PRI | NULL    | auto_increment |
    | value | varchar(255)     | NO   |     | NULL    |                |
    +-------+------------------+------+-----+---------+----------------+
    2 rows in set (0.01 sec)
    

    then we can reproduce our alter statement with:

    SELECT 
      CONCAT(
        COLUMN_NAME, 
        ' @new_type', 
        IF(IS_NULLABLE='NO', ' NOT NULL ', ' '), 
        EXTRA
      ) AS s
    FROM 
      INFORMATION_SCHEMA.COLUMNS 
    WHERE 
      TABLE_SCHEMA='test' 
      AND 
      TABLE_NAME='t'
    

    the result would be:

    +--------------------------------------+
    | s                                    |
    +--------------------------------------+
    | id @new_type NOT NULL auto_increment |
    | value @new_type NOT NULL             |
    +--------------------------------------+
    

    Here I've left @new_type as a placeholder to indicate that we can use a variable for it (or even substitute our new type directly into the query). With a variable, that would be:

    • Set our variables.

      mysql> SET @new_type := 'VARCHAR(10)', @column_name := 'value';
      Query OK, 0 rows affected (0.00 sec)
      
    • Prepare variable for prepared statement (it's long query, but I've left explanations above):

      SET @sql = (SELECT CONCAT('ALTER TABLE t CHANGE `',COLUMN_NAME, '` `', COLUMN_NAME, '` ', @new_type, IF(IS_NULLABLE='NO', ' NOT NULL ', ' '), EXTRA) AS s FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA='test' AND TABLE_NAME='t' AND COLUMN_NAME=@column_name);
      
    • Prepare statement:

      mysql> prepare stmt from @sql;
      Query OK, 0 rows affected (0.00 sec)
      Statement prepared
      
    • Finally, execute it:

      mysql> execute stmt;
      Query OK, 0 rows affected (0.22 sec)
      Records: 0  Duplicates: 0  Warnings: 0
      

    Then we'll get our data type changed to VARCHAR(10) while keeping all the other specifiers:

    mysql> DESCRIBE t;
    +-------+------------------+------+-----+---------+----------------+
    | Field | Type             | Null | Key | Default | Extra          |
    +-------+------------------+------+-----+---------+----------------+
    | id    | int(11) unsigned | NO   | PRI | NULL    | auto_increment |
    | value | varchar(10)      | NO   |     | NULL    |                |
    +-------+------------------+------+-----+---------+----------------+
    2 rows in set (0.00 sec)
    
    qid & accept id: (22921153, 22921334) query: SQL - find all records where col like soup:

    soup wrap:

    Use the UNION ALL operator and a basic join:

    SELECT t.* 
    FROM TABLENAME t
    JOIN(
       SELECT '123' As pattern FROM dual UNION ALL
       SELECT '245' FROM dual UNION ALL
       SELECT '234' FROM dual UNION ALL
       SELECT '323' FROM dual UNION ALL
       SELECT '163' FROM dual 
    ) p
    ON t.col1 LIKE '%' || p.pattern || '%'
    

    demo: http://sqlfiddle.com/#!4/a914f/2


    EDIT


    If there is another table that contains pattern values, the task is even easier, just:

    SELECT t.* 
    FROM TABLENAME t
    JOIN AnotherTable p
    ON t.col1 LIKE '%' || p.pattern || '%'
    

    Demo: http://sqlfiddle.com/#!4/e0318/1

    qid & accept id: (22959571, 22959761) query: SQL: Limit by unknown number of occurences soup:

    soup wrap:

    That's easy. You must use a where clause and evaluate the minimum type there.

    SELECT * 
    FROM mytable
    WHERE type = (select min(type) from mytable) 
    ORDER BY id;
    

    EDIT: Do the same with max() if you want to get the maximum type records.

    EDIT: In case the types are not ascending as in your example, you will have to get the type of the minimum/maximum id instead of getting the minimum/maximum type:

    SELECT * 
    FROM mytable
    WHERE type = (select type from mytable where id = (select min(id) from mytable)) 
    ORDER BY id;
    
    qid & accept id: (22963994, 22964432) query: (Query) Number of tries before the first correct solution soup:

    soup wrap:

    Possibly using a sub query (not tested):-

    SELECT problem_id, IF(b.user_id IS NULL, 0, COUNT(*))
    FROM solution a
    LEFT OUTER JOIN
    (
        SELECT user_id, problem_id, MIN(date) AS min_date
        FROM solution
        WHERE correct = true
        GROUP BY user_id, problem_id
    ) b
    ON a.problem_id = b.problem_id
    AND a.user_id = b.user_id
    AND a.date < b.min_date
    WHERE a.user_id = ?
    GROUP BY problem_id
    

    EDIT - Having played with the test data I think I may have a solution. Not sure if there are any edge cases it fails on though:-

    SELECT a.user_id, a.problem_id, SUM(IF(b.user_id IS NULL OR a.date <= b.min_date, 1, 0))
    FROM solution a
    LEFT OUTER JOIN 
    (
        SELECT user_id, problem_id, MIN(date) AS min_date
        FROM solution
        WHERE correct = 'true'
        GROUP BY user_id, problem_id
    ) b
    ON a.problem_id = b.problem_id
    AND a.user_id = b.user_id
    GROUP BY a.user_id, problem_id
    

    This has a subquery to find the lowest date with a correct solution for a user's problem and joins it against the list of solutions. It then does a SUM of 1s and 0s, with a row counting as 1 if there is no correct solution, or if there is a correct solution and its date is greater than or equal to this solution's date.

    SQL fiddle for it here:-

    http://www.sqlfiddle.com/#!2/f48e11/1

    qid & accept id: (22986618, 22987277) query: Compare financial data from this week to the same week last year soup:

    soup wrap:

    You don't sound confident about how you (or more specifically your boss) want to correlate a week's value from one year to the next ("go by month mainly, and it can be out by a week or 2").

    Here is a starting point based on the data you shared

    Example of last year's report

    SELECT YEAR(`date`) AS `year`
        , WEEKOFYEAR(`date`) AS weekno
        ,Storecode AS storecode
        , SUM(amount) AS amount
    FROM transactions
    WHERE YEAR(`date`) = YEAR(DATE_SUB(NOW(), INTERVAL 1 YEAR))
    GROUP BY YEAR(`date`), WEEKOFYEAR(`date`), Storecode
    

    Here is an example of that query with comparisons

    SELECT this.storecode 
       , this.weekno
       , this.amount AS current_amount
       , history.amount AS past_amount
    FROM (SELECT YEAR(`date`) AS `year`
            , WEEKOFYEAR(`date`) AS weekno
            ,Storecode AS storecode
            , SUM(amount) AS amount
          FROM transactions
          WHERE YEAR(`date`) = YEAR(NOW())
          GROUP BY YEAR(`date`), WEEKOFYEAR(`date`), Storecode) AS this
    JOIN (SELECT YEAR(`date`) AS `year`
            , WEEKOFYEAR(`date`) AS weekno
            ,Storecode AS storecode
            , SUM(amount) AS amount
          FROM transactions
          WHERE YEAR(`date`) = YEAR(DATE_SUB(NOW(), INTERVAL 1 YEAR))
          GROUP BY YEAR(`date`), WEEKOFYEAR(`date`), Storecode) AS history
      ON this.weekno = history.weekno
        AND this.storecode = history.storecode;
    
    qid & accept id: (23034365, 23039408) query: How to exclude a word from a regular expression in oracle? soup:

    soup wrap:

    Oracle does not support lookaheads. With the products as you show, you can use this:

    SELECT * FROM TABLENAME WHERE REGEXP_LIKE(PRODUCT, 'product_\d+(\s*\d+)*', 'c');
    

    This is only based on the product names you have shown. If it does not catch everything you want, give us a better idea of what we are trying to match.

    Another option: it's a hack, but if you're confident that "product_digits " should never be followed by a "t", you can use this:

    SELECT * FROM TABLENAME WHERE REGEXP_LIKE(PRODUCT, 'product_\d+($|\s)($|[^t]).*', 'c');
    
    qid & accept id: (23035651, 23038168) query: mySQL show logs within a time range from each message? soup:

    soup wrap:

    Rather than any sort of subquery, it sounds like what you want can be accomplished with a LEFT JOIN of the table against itself, but instead of a simple join condition, use the epoch BETWEEN... condition in the join's ON clause.

    The left side of the join will be filtered to username = 'bob' while the right side will locate messages in the related data ranges.

    Add a DISTINCT to deduplicate rows if needed.

    SELECT
      DISTINCT
      rng.epoch,
      rng.username,
      rng.message
    FROM
      logs AS main
      LEFT JOIN logs as rng 
        /* Join the epoch values from the table to related rows within 3 hours */
        ON rng.epoch BETWEEN main.epoch AND (main.epoch + INTERVAL 3 HOUR)
    /* filter the main one for the desired username */
    WHERE main.username = 'bob'
    

    What isn't clear from your question yet is whether you ultimately only want bob's rows returned. If that is the case, both sides of the join need to be filtered in the WHERE clause, or usernames matched in the ON clause:

    FROM
      logs AS main
      LEFT JOIN logs as rng 
        ON rng.epoch BETWEEN main.epoch AND (main.epoch + INTERVAL 3 HOUR)
        /* match usernames so the related rows are only bob's */
        AND main.username = rng.username
    
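    For completeness, a sketch of the WHERE-clause variant mentioned above (note the IS NULL branch, which keeps left-join rows that found no match in the range):

    WHERE main.username = 'bob'
      AND (rng.username = 'bob' OR rng.username IS NULL)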
    qid & accept id: (23069422, 23069632) query: How to create MySQL database for "type" quiz soup:

    soup wrap:

    Well, basically you want:

    Questions, ID and Text
    Choices ID, QuestionID and the text
    

    Answers is just

    QuestionID, ChoiceID
    
    
    Questions Table
    Id Text 
    1  'What is your favourite colour?'
    
    Choices Table
    Id, QuestionID, Text
    1   1           'Red'
    2   1           'Blue'
    3   1           'Green'
    4   1           'Pale Blue Green with yellow dots'
    
    Answers
    VictimID QuestionID ChoiceID
    (userID?)1          4
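
    A minimal DDL sketch of those three tables (column types and key names are assumptions):

    CREATE TABLE Questions (
        Id INT AUTO_INCREMENT PRIMARY KEY,
        Text VARCHAR(255) NOT NULL
    );

    CREATE TABLE Choices (
        Id INT AUTO_INCREMENT PRIMARY KEY,
        QuestionID INT NOT NULL,
        Text VARCHAR(255) NOT NULL,
        FOREIGN KEY (QuestionID) REFERENCES Questions (Id)
    );

    CREATE TABLE Answers (
        VictimID INT NOT NULL,
        QuestionID INT NOT NULL,
        ChoiceID INT NOT NULL,
        FOREIGN KEY (ChoiceID) REFERENCES Choices (Id)
    );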
    
    qid & accept id: (23091177, 23091405) query: Find lowest value in particular group soup:

    soup wrap:

    What you need is a subquery with a group by.

    One way to do this which is easy to follow is:

    SELECT column1, name, column2
    FROM MyTable as mt1
    WHERE column1 in (SELECT Min(column1) FROM MyTable as mt2 GROUP BY column2)
    

    But a better, cleaner way:

    SELECT column1, name, column2
    FROM MyTable as mt1
    INNER JOIN
    (SELECT Min(column1) as minc1 FROM MyTable as mt2 GROUP BY column2) as mt2
    ON mt1.column1=mt2.minc1;
    

    SQLFiddle

    Note: These two forms should be supported by most DBMSs.

    qid & accept id: (23096845, 23101361) query: How to find overlapping periods recursively in SQL Server soup:

    soup wrap:

    I would first work out where the islands are in your data set, and only after that, work out which ones are overlapped by your query ranges:

    declare @t table (ID int,StartDate date,EndDate date)
    insert into @t(ID,StartDate,EndDate) values
    (1   ,'20140105','20140110'),
    (2   ,'20140106','20140111'),
    (3   ,'20140107','20140112'),
    (4   ,'20140108','20140113'),
    (5   ,'20140109','20140114'),
    (6   ,'20140126','20140131'),
    (7   ,'20140127','20140201'),
    (8   ,'20140128','20140202'),
    (9   ,'20140129','20140203'),
    (10  ,'20140130','20140204')
    
    declare @Start date
    declare @End date
    select @Start='20140106',@End='20140107'
    
    ;With PotIslands as (
        --Find ranges which aren't overlapped at their start
        select StartDate,EndDate from @t t where
            not exists (select * from @t t2 where
                          t2.StartDate < t.StartDate and
                          t2.EndDate >= t.StartDate)
        union all
        --Extend the ranges by any other ranges which overlap on the end
        select pi.StartDate,t.EndDate
        from PotIslands pi
                inner join
            @t t
                on
                    pi.EndDate >= t.StartDate and pi.EndDate < t.EndDate
    ), Islands as (
        select StartDate,MAX(EndDate) as EndDate from PotIslands group by StartDate
    )
    select * from Islands i where @Start <= i.EndDate and @End >= i.StartDate
    

    Result:

    StartDate  EndDate
    ---------- ----------
    2014-01-05 2014-01-14
    

    If you need the individual rows, you can now join the selected islands back to the @t table for a simple range query.
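
    That final join back could be sketched like this (appended in place of the last SELECT in the statement above, since the CTEs are only visible within it):

    select t.*
    from Islands i
        inner join @t t
            on t.StartDate >= i.StartDate and t.EndDate <= i.EndDate
    where @Start <= i.EndDate and @End >= i.StartDate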

    This works because if any row within an island is included in a range, the island's remaining rows will always be included as well. So we find the islands first.

    qid & accept id: (23106523, 23106632) query: SQL Joining Of Queries soup:

    soup wrap:

    You can use conditional aggregation:

    SELECT i.DSTAMP, i.NAME,
           SUM(CASE WHEN i.CODE = 'IN' THEN i.WEIGHT END) as IN_KG_Weight,
           SUM(CASE WHEN i.CODE = 'OUT' THEN i.WEIGHT END) as OUT_KG_Weight
    FROM inventory i
    WHERE i.CODE IN ('IN', 'OUT')
    GROUP BY i.DSTAMP, i.NAME;
    

    EDIT:

    To group this just by date:

    SELECT to_char(i.DSTAMP, 'YYYY-MM-DD') as yyyymmdd, i.NAME,
           SUM(CASE WHEN i.CODE = 'IN' THEN i.WEIGHT END) as IN_KG_Weight,
           SUM(CASE WHEN i.CODE = 'OUT' THEN i.WEIGHT END) as OUT_KG_Weight
    FROM inventory i
    WHERE i.CODE IN ('IN', 'OUT')
    GROUP BY to_char(i.DSTAMP, 'YYYY-MM-DD'), i.NAME;
    

    This converts the value to a date string, which is fine for ordering.
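
    If you would rather group by a real date value than a string, Oracle's TRUNC (which strips the time-of-day portion of a date) is an alternative:

    SELECT TRUNC(i.DSTAMP) as dstamp_day, i.NAME,
           SUM(CASE WHEN i.CODE = 'IN' THEN i.WEIGHT END) as IN_KG_Weight,
           SUM(CASE WHEN i.CODE = 'OUT' THEN i.WEIGHT END) as OUT_KG_Weight
    FROM inventory i
    GROUP BY TRUNC(i.DSTAMP), i.NAME;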

    qid & accept id: (23116249, 23116789) query: string substitution from text file to another string soup:
    soup wrap:
    awk '{print "INSERT INTO users (email,paypal_tran,CCReceipt) VALUES"; print "(\x27"$1"\x27,\x27"$2"\x27,\x27"$3"\x27);"}' input.txt
    

    This converts your sample input to the preferred output, and it should work for multi-line input.

    EDIT

    The variables you are using in this line:

    cat temp1 | awk 'email="$1"; transaction="$2"; ccreceipt="$3";'
    

    are only visible to awk inside that command; they are not shell variables. Also, in your sed command, use double quotes instead of single quotes so the shell can expand the variable:

    sed "s/EMAIL/$email/"
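
    As a hypothetical shell-only alternative (no awk or sed), read can split each line into real shell variables; sample_input.txt and its values here are made up to match the format above:

```shell
# Assumption: lines are "email transaction ccreceipt" separated by whitespace.
# sample_input.txt is a stand-in for your temp1/input.txt file.
printf 'joe@example.com 4XW12345 99887\n' > sample_input.txt

while read -r email transaction ccreceipt; do
    printf "INSERT INTO users (email,paypal_tran,CCReceipt) VALUES ('%s','%s','%s');\n" \
        "$email" "$transaction" "$ccreceipt"
done < sample_input.txt
```

    Because the variables are set by the shell itself, they remain available for any later commands in the loop body.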
    
    qid & accept id: (23124414, 23124493) query: Android auto refresh when new data inserted into listview soup:

    Call notifyDataSetChanged() on your Adapter.


    Some additional specifics on how/when to call notifyDataSetChanged() can be viewed in this Google I/O video.


    Use a Handler and its postDelayed method to invalidate the list's adapter as follows:

    final Handler handler = new Handler();
    handler.postDelayed( new Runnable() {

        @Override
        public void run() {
            adapter.notifyDataSetChanged();
            handler.postDelayed( this, 60 * 1000 );
        }
    }, 60 * 1000 );

    You must only update UI in the main (UI) thread.


    By creating the handler in the main thread, you ensure that everything you post to the handler is run in the main thread also.

    try
    {
        validat_user(receivedName);
        final Handler handler = new Handler();
        handler.postDelayed( new Runnable() {

            @Override
            public void run() {
                todoItems.clear();
                //alertDialog.cancel();
                validat_user(receivedName);
                handler.postDelayed( this, 60 * 1000 );
            }
        }, 60 * 1000 );
    }
    catch(Exception e)
    {
        display("Network error.\nPlease check with your network settings.");
    }

    The first validat_user() call loads the data initially; after that, the Handler updates the values every minute.

    \n

    my full code is below

    \n
    package com.example.employeeinduction;\n\nimport java.io.BufferedReader;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.io.InputStreamReader;\nimport java.util.ArrayList;\nimport java.util.Collections;\nimport java.util.Iterator;\nimport java.util.List;\n\nimport org.apache.http.HttpResponse;\nimport org.apache.http.NameValuePair;\nimport org.apache.http.client.HttpClient;\nimport org.apache.http.client.entity.UrlEncodedFormEntity;\nimport org.apache.http.client.methods.HttpPost;\nimport org.apache.http.impl.client.DefaultHttpClient;\nimport org.apache.http.message.BasicNameValuePair;\nimport org.apache.http.params.BasicHttpParams;\nimport org.apache.http.params.HttpConnectionParams;\nimport org.apache.http.params.HttpParams;\nimport org.json.JSONArray;\nimport org.json.JSONObject;\n\nimport android.app.Activity;\nimport android.app.AlertDialog;\nimport android.app.ProgressDialog;\nimport android.content.Context;\nimport android.content.DialogInterface;\nimport android.content.Intent;\nimport android.content.res.TypedArray;\nimport android.os.AsyncTask;\nimport android.os.Bundle;\nimport android.os.Handler;\nimport android.support.v4.widget.DrawerLayout;\nimport android.util.Log;\nimport android.view.Menu;\nimport android.view.MenuItem;\nimport android.view.View;\nimport android.widget.AdapterView;\nimport android.widget.AdapterView.OnItemClickListener;\nimport android.widget.ArrayAdapter;\nimport android.widget.ImageView;\nimport android.widget.ListView;\nimport android.widget.PopupMenu;\nimport android.widget.PopupMenu.OnMenuItemClickListener;\nimport android.widget.Toast;\n\n\npublic class pdf extends Activity\n{\n\n    ImageView iv;\n    public boolean connect=false,logged=false;\n    public String db_select;\n    ListView l1;\n    AlertDialog alertDialog;\n    String mPwd,UName1="Success",UName,ret,receivedName;\n    public Iterator itr;\n    //private String SERVICE_URL = "http://61.12.7.197:8080/pdf";\n    //private String 
SERVICE_URL1 = "http://61.12.7.197:8080/url";\n    //private final String SERVICE_URL = "http://10.54.3.208:8080/Employee/person/pdf";\n    //private final String SERVICE_URL1 = "http://10.54.3.208:8080/Employee/person/url";\n    private final String SERVICE_URL = Urlmanager.Address+"pdf";\n    private final String SERVICE_URL1 = Urlmanager.Address+"url";\n    private final String TAG = "Pdf";\n    ArrayList todoItems;\n    Boolean isInternetPresent = false;\n    ConnectionDetector cd;\n    ArrayAdapter aa;\n    public List list1=new ArrayList();\n    public DrawerLayout mDrawerLayout;\n    public ListView mDrawerList;\n    //public ActionBarDrawerToggle mDrawerToggle;\n\n    // NavigationDrawer title "Nasdaq" in this example\n    public CharSequence mDrawerTitle;\n\n    //  App title "Navigation Drawer" in this example \n    public CharSequence mTitle;\n\n    // slider menu items details \n    public String[] navMenuTitles=null;\n    public TypedArray navMenuIcons;\n\n    public ArrayList navDrawerItems;\n    public NavDrawerListAdapter adapter;\n\n    @Override\n    protected void onCreate(Bundle savedInstanceState) \n    {\n        super.onCreate(savedInstanceState);\n        setContentView(R.layout.sliding_project);\n         iv = (ImageView)findViewById(R.id.imageView2);\n        l1 = (ListView)findViewById(R.id.list);\n\n\n        mTitle = mDrawerTitle = getTitle();\n\n        // getting items of slider from array\n        navMenuTitles = getResources().getStringArray(R.array.nav_drawer_items);\n\n        // getting Navigation drawer icons from res \n        navMenuIcons = getResources()\n                .obtainTypedArray(R.array.nav_drawer_icons);\n\n        mDrawerLayout = (DrawerLayout) findViewById(R.id.drawer_layout);\n        mDrawerList = (ListView) findViewById(R.id.list_slidermenu);\n\n        navDrawerItems = new ArrayList();\n\n\n        // list item in slider at 1 Home Nasdaq details\n        navDrawerItems.add(new NavDrawerItem(navMenuTitles[0], 
navMenuIcons.getResourceId(0, -1)));\n        // list item in slider at 2 Facebook details\n        navDrawerItems.add(new NavDrawerItem(navMenuTitles[1], navMenuIcons.getResourceId(1, -1)));\n        // list item in slider at 3 Google details\n        navDrawerItems.add(new NavDrawerItem(navMenuTitles[2], navMenuIcons.getResourceId(2, -1)));\n        // list item in slider at 4 Apple details\n\n\n        // Recycle array\n        navMenuIcons.recycle();\n\n        mDrawerList.setOnItemClickListener(new SlideMenuClickListener());\n\n        // setting list adapter for Navigation Drawer\n        adapter = new NavDrawerListAdapter(getApplicationContext(),\n                navDrawerItems);\n        mDrawerList.setAdapter(adapter);\n\n        if (savedInstanceState == null) {\n              displayView(0);\n        }\n\n          iv.setOnClickListener(new View.OnClickListener() {\n\n                @Override\n                public void onClick(View v) {\n\n\n                    PopupMenu popup = new PopupMenu(getBaseContext(), v);\n\n                    /** Adding menu items to the popumenu */\n                    popup.getMenuInflater().inflate(R.menu.main, popup.getMenu());\n\n                    popup.setOnMenuItemClickListener(new OnMenuItemClickListener() {\n\n                        @Override\n                        public boolean onMenuItemClick(MenuItem item) {\n\n                            switch (item.getItemId()){\n                            case R.id.Home:\n                                Intent a = new Intent(pdf.this,Design_Activity.class);\n                                startActivity(a);\n                                //Projects_Accel.this.finish();\n                            //  return true;\n                                break;\n                            case R.id.Logout:\n                                /*Intent z = new Intent(this,MainActivity.class);\n                                z.addFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP);\n         
                       startActivity(z);\n                                this.finish();*/\n                                Intent z = new Intent(pdf.this,MainActivity.class);\n                                z.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP | \n                                        Intent.FLAG_ACTIVITY_CLEAR_TASK |\n                                        Intent.FLAG_ACTIVITY_NEW_TASK);\n                                startActivity(z);\n                                pdf.this.finish();\n                            //  return true;\n                                break;\n                            }\n\n                            return true;\n                        }\n                    });\n                        popup.show();\n                }\n            });\n\n             todoItems = new ArrayList();\n                aa = new ArrayAdapter(this,R.layout.list_row,R.id.title,todoItems);\n                l1.setAdapter(aa);\n                todoItems.clear();\n                Intent intent = getIntent();\n                receivedName = (String) intent.getSerializableExtra("PROJECT");\n                cd = new ConnectionDetector(getApplicationContext());\n                isInternetPresent = cd.isConnectingToInternet();\n                if(isInternetPresent)\n                {\n                try\n                {\n                    validat_user(receivedName);\n                    final Handler handler = new Handler();\n                    handler.postDelayed( new Runnable() {\n\n                        @Override\n                        public void run() {\n                            todoItems.clear();\n                            //alertDialog.cancel();\n                            validat_user(receivedName);\n                            handler.postDelayed( this, 60 * 1000 );\n                        }\n                    }, 60 * 1000 );\n\n\n                }\n\n                catch(Exception e)\n                {\n                    
display("Network error.\nPlease check with your network settings.");\n                }\n                }\n                else\n                {\n                    display("No Internet Connection..");\n                }\n\n                l1.setOnItemClickListener(new OnItemClickListener() {\n                    public void onItemClick(AdapterView parent, View view,\n                        int position, long id) {\n\n                     String name=(String)parent.getItemAtPosition(position);\n\n                     /*Toast.makeText(getBaseContext(), name, Toast.LENGTH_LONG).show();\n                      Intent i = new Intent(getBaseContext(),Webview.class);\n                      i.putExtra("USERNAME", name);\n                      startActivity(i);*/\n                     cd = new ConnectionDetector(getApplicationContext());\n                        isInternetPresent = cd.isConnectingToInternet();\n                     if(isInternetPresent)\n                        {\n                     try\n                        {\n                            validat_user1(receivedName,name);\n\n                        }\n                        catch(Exception e)\n                        {\n                            display("Network error.\nPlease check with your network settings.");\n\n                        }\n\n                        }\n                     else\n                        {\n                            display("No Internet Connection..");\n                        }\n                    }\n                });\n\n             }      \n    private class SlideMenuClickListener implements\n    ListView.OnItemClickListener {\n@Override\npublic void onItemClick(AdapterView parent, View view, int position,\n        long id) {\n    // display view for selected item\n    displayView(position);\n}\n}\n\n@Override\npublic boolean onCreateOptionsMenu(Menu menu) {\ngetMenuInflater().inflate(R.menu.main, menu);\n//setMenuBackground();\nreturn 
true;\n}\n\n\n/*@Override\npublic boolean onOptionsItemSelected(MenuItem item) {\n//  title/icon\nif (mDrawerToggle.onOptionsItemSelected(item)) {\n    return true;\n}\n// Handle action bar actions click\nswitch (item.getItemId()) {\ncase R.id.action_settings:\n    return true;\ndefault:\n    return super.onOptionsItemSelected(item);\n}\n}*/\n\n//called when invalidateOptionsMenu() invoke \n\n@Override\npublic boolean onPrepareOptionsMenu(Menu menu) {\n// if Navigation drawer is opened, hide the action items\n//boolean drawerOpen = mDrawerLayout.isDrawerOpen(mDrawerList);\n//menu.findItem(R.id.action_settings).setVisible(!drawerOpen);\nreturn super.onPrepareOptionsMenu(menu);\n}\n\nprivate void displayView(int position) {\n// update the main content with called Fragment\nswitch (position) {\n\ncase 1:\n    //fragment = new Fragment2Profile();\n    Intent i = new Intent(pdf.this,Design_Activity.class);\n    startActivity(i);\n    pdf.this.finish();\n    break;\ncase 2:\n    //fragment = new Fragment3Logout();\n    Intent z = new Intent(pdf.this,MainActivity.class);\n    z.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP | \n             Intent.FLAG_ACTIVITY_CLEAR_TASK |\n             Intent.FLAG_ACTIVITY_NEW_TASK);\n        startActivity(z);\n        pdf.this.finish();\n    break;\n\ndefault:\n    break;\n}\n\n\n\n}\n\n\n\n\n        public void display(String msg) \n        {\n            Toast.makeText(pdf.this, msg, Toast.LENGTH_LONG).show();\n        }\n        private void validat_user(String st)\n        {\n\n            WebServiceTask wst = new WebServiceTask(WebServiceTask.POST_TASK, this, "");\n\n           wst.addNameValuePair1("TABLE_NAME", st);\n           // wst.addNameValuePair("Emp_PWD", stg2);\n           // db_select=stg1;\n            //display("I am");\n            wst.execute(new String[] { SERVICE_URL });\n            //display(SERVICE_URL);\n\n        }\n        private void validat_user1(String stg1,String stg2)\n        {\n            db_select=stg1;\n 
           WebServiceTask wst = new WebServiceTask(WebServiceTask.POST_TASK, this, "Loading...");\n\n            wst.addNameValuePair1("PDF_NAME", stg1);\n            wst.addNameValuePair1("TABLE_NAME1", stg2);\n            wst.execute(new String[] { SERVICE_URL1 });\n\n        }\n        @SuppressWarnings("deprecation")\n        public void no_net()\n        {\n            display( "No Network Connection");\n            final AlertDialog alertDialog = new AlertDialog.Builder(pdf.this).create();\n            alertDialog.setTitle("No Internet Connection");\n            alertDialog.setMessage("You don't have internet connection.\nElse please check the Internet Connection Settings.");\n            //alertDialog.setIcon(R.drawable.error_info);\n            alertDialog.setCancelable(false);\n            alertDialog.setButton("Close", new DialogInterface.OnClickListener() \n            {\n                public void onClick(DialogInterface dialog, int which)\n                {   \n                    alertDialog.cancel();\n                    pdf.this.finish();\n                    System.exit(0);\n                }\n            });\n            alertDialog.setButton2("Use Local DataBase", new DialogInterface.OnClickListener() \n            {\n                public void onClick(DialogInterface dialog, int which)\n                {\n                    display( "Accessing local DataBase.....");\n                    alertDialog.cancel();\n                }\n            });\n            alertDialog.show();\n        }\n\n        private class WebServiceTask extends AsyncTask {\n\n            public static final int POST_TASK = 1;\n\n            private static final String TAG = "WebServiceTask";\n\n            // connection timeout, in milliseconds (waiting to connect)\n            private static final int CONN_TIMEOUT = 12000;\n\n            // socket timeout, in milliseconds (waiting for data)\n            private static final int SOCKET_TIMEOUT = 12000;\n\n            
private int taskType = POST_TASK;\n            private Context mContext = null;\n            private String processMessage = "Processing...";\n\n            private ArrayList params = new ArrayList();\n\n            private ProgressDialog pDlg = null;\n\n            public WebServiceTask(int taskType, Context mContext, String processMessage) {\n\n                this.taskType = taskType;\n                this.mContext = mContext;\n                this.processMessage = processMessage;\n            }\n\n            public void addNameValuePair1(String name, String value) {\n\n                params.add(new BasicNameValuePair(name, value));\n            }\n            @SuppressWarnings("deprecation")\n            private void showProgressDialog() {\n\n                pDlg = new ProgressDialog(mContext);\n                pDlg.setMessage(processMessage);\n                pDlg.setProgressDrawable(mContext.getWallpaper());\n                pDlg.setProgressStyle(ProgressDialog.STYLE_SPINNER);\n                pDlg.setCancelable(false);\n                pDlg.show();\n\n            }\n\n            @Override\n            protected void onPreExecute() {\n\n                showProgressDialog();\n\n            }\n\n            protected String doInBackground(String... 
urls) {\n\n                String url = urls[0];\n                String result = "";\n\n                HttpResponse response = doResponse(url);\n\n                if (response == null) {\n                    return result;\n                } else {\n\n                    try {\n\n                        result = inputStreamToString(response.getEntity().getContent());\n\n                    } catch (IllegalStateException e) {\n                        Log.e(TAG, e.getLocalizedMessage(), e);\n\n                    } catch (IOException e) {\n                        Log.e(TAG, e.getLocalizedMessage(), e);\n                    }\n\n                }\n\n                return result;\n            }\n\n            @Override\n            protected void onPostExecute(String response) {\n\n                handleResponse(response);\n                pDlg.dismiss();\n\n            }\n\n\n            // Establish connection and socket (data retrieval) timeouts\n            private HttpParams getHttpParams() {\n\n                HttpParams htpp = new BasicHttpParams();\n\n                HttpConnectionParams.setConnectionTimeout(htpp, CONN_TIMEOUT);\n                HttpConnectionParams.setSoTimeout(htpp, SOCKET_TIMEOUT);\n\n                return htpp;\n            }\n\n            private HttpResponse doResponse(String url) {\n\n                // Use our connection and data timeouts as parameters for our\n                // DefaultHttpClient\n                HttpClient httpclient = new DefaultHttpClient(getHttpParams());\n\n                HttpResponse response = null;\n\n                try {\n                    switch (taskType) {\n\n                    case POST_TASK:\n                        HttpPost httppost = new HttpPost(url);\n                        // Add parameters\n                        httppost.setEntity(new UrlEncodedFormEntity(params));\n\n                        response = httpclient.execute(httppost);\n                        break;\n                    
}\n                } catch (Exception e) {\n                    display("Remote DataBase can not be connected.\nPlease check network connection.");\n\n                    Log.e(TAG, e.getLocalizedMessage(), e);\n                    return null;\n\n                }\n\n                return response;\n            }\n\n            private String inputStreamToString(InputStream is) {\n\n                String line = "";\n                StringBuilder total = new StringBuilder();\n\n                // Wrap a BufferedReader around the InputStream\n                BufferedReader rd = new BufferedReader(new InputStreamReader(is));\n\n                try {\n                    // Read response until the end\n                    while ((line = rd.readLine()) != null) {\n                        total.append(line);\n                    }\n                } catch (IOException e) {\n                    Log.e(TAG, e.getLocalizedMessage(), e);\n                }\n\n                // Return full string\n                return total.toString();\n            }\n\n        }\n        public void handleResponse(String response) \n        {    //display("JSON responce is : "+response);\n            if(!response.equals(""))\n            {\n           try {\n\n                JSONObject jso = new JSONObject(response);\n\n\n                      int UName = jso.getInt("status1");\n\n                      if(UName==1)\n                      {\n                            String status = jso.getString("reps1");\n                            ret=status.substring(12,status.length()-2);\n                            todoItems.add(0, ret);\n                            aa.notifyDataSetChanged();\n                      }\n                      else if(UName==-1)\n                      {\n                          String status = jso.getString("status");\n                          //ret=status.substring(12,status.length()-2);\n                          //display(status);\n                           
 Intent intObj=new Intent(pdf.this,Webview.class);\n                             intObj.putExtra("USERNAME",status);\n                            startActivity(intObj);\n                      }\n                      else if(UName>1)\n                      {\n//                       int count=Integer.parseInt(UName);\n//                       display("Number of Projects have been handling in AFL right now: "+count);\n                        list1=new ArrayList();\n\n                        JSONArray array=jso.getJSONArray("reps1");\n                        for(int i=0;i parent, View view, int position,\n                    long id) {\n                // display view for selected item\n                displayView(position);\n            }\n        }\n\n\n        private void displayView(int position) {\n            // update the main content with called Fragment\n        //  Fragment fragment = null;\n            switch (position) {\n            case 0:\n            //  fragment = new Fragment1User();\n                break;\n            case 1:\n            //  fragment = new Fragment2Profile();\n                break;\n            case 2:\n            //  fragment = new Fragment3Logout();\n                break;\n\n            default:\n                break;\n            }\n        }*/\n\n\n}\n
    \n soup wrap:

    Call notifyDataSetChanged() on your Adapter.

    Some additional specifics on how/when to call notifyDataSetChanged() can be viewed in this Google I/O video.
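    As background on why that single call refreshes the list: the ListView registers a DataSetObserver on its adapter, and notifyDataSetChanged() fires that observer, which makes the view re-read the data. A stripped-down plain-Java model of the contract (all class and method names here are illustrative, not the actual Android API):

    ```java
    import java.util.ArrayList;
    import java.util.List;

    // Minimal model of the Adapter/DataSetObserver contract: the "view"
    // re-reads the data only when notifyDataSetChanged() fires.
    class MiniAdapter {
        final List<String> items = new ArrayList<>();
        private Runnable observer = () -> {};           // stand-in for DataSetObserver

        void registerObserver(Runnable r) { observer = r; }
        void notifyDataSetChanged() { observer.run(); } // tells the view to redraw
    }

    public class NotifyDemo {
        static final List<String> rendered = new ArrayList<>();

        public static void main(String[] args) {
            MiniAdapter adapter = new MiniAdapter();
            // The "view" snapshots the adapter's data each time it is notified.
            adapter.registerObserver(() -> {
                rendered.clear();
                rendered.addAll(adapter.items);
            });

            adapter.items.add("row 1");     // mutating the data alone does nothing visible...
            adapter.notifyDataSetChanged(); // ...until the adapter notifies its observer
            System.out.println(rendered);
        }
    }
    ```

    This is also why mutating the backing list from a background thread is not enough: the notify call, and hence the redraw, must happen on the thread the view lives on.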

    Use a Handler and its postDelayed method to invalidate the list's adapter as follows:

    final Handler handler = new Handler();
    handler.postDelayed( new Runnable() {
    
        @Override
        public void run() {
            adapter.notifyDataSetChanged();
            handler.postDelayed( this, 60 * 1000 );
        }
    }, 60 * 1000 );
    

    You must only update UI in the main (UI) thread.

    By creating the handler in the main thread, you ensure that everything you post to the handler is run in the main thread also.

    try
                    {
                        validat_user(receivedName);
                        final Handler handler = new Handler();
                        handler.postDelayed( new Runnable() {
    
                            @Override
                            public void run() {
                                todoItems.clear();
                                //alertDialog.cancel();
                                validat_user(receivedName);
                                handler.postDelayed( this, 60 * 1000 );
                            }
                        }, 60 * 1000 );
    
    
                    }
    
                    catch(Exception e)
                    {
                        display("Network error.\nPlease check with your network settings.");
                    }
    

    The first validat_user() call loads the data initially; after that, the Handler updates the values every minute.
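    The postDelayed self-reposting above is Android's way of running a task every minute on the UI thread. The same pattern can be sketched in plain Java with a ScheduledExecutorService (no Android classes; the names and the 50 ms interval are illustrative, not from the original code):

    ```java
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicInteger;

    public class PeriodicRefresh {
        // Stand-in for notifyDataSetChanged(): just count the refreshes.
        static final AtomicInteger refreshes = new AtomicInteger();

        /** Runs the "refresh" task at a fixed interval until it has fired `times` times. */
        static int runRefreshes(int times) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            CountDownLatch done = new CountDownLatch(times);
            // Like handler.postDelayed(this, 60 * 1000), but on an executor thread
            // instead of the Android main thread.
            scheduler.scheduleAtFixedRate(() -> {
                refreshes.incrementAndGet(); // adapter.notifyDataSetChanged() would go here
                done.countDown();
            }, 0, 50, TimeUnit.MILLISECONDS);
            try {
                done.await(5, TimeUnit.SECONDS);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                scheduler.shutdownNow();
            }
            return refreshes.get();
        }

        public static void main(String[] args) {
            System.out.println("refreshed " + runRefreshes(3) + " times");
        }
    }
    ```

    The difference on Android is that the Handler queues the Runnable onto the main thread's looper, so the adapter update inside run() is automatically safe to touch the UI.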

    my full code is below

    package com.example.employeeinduction;
    
    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Iterator;
    import java.util.List;
    
    import org.apache.http.HttpResponse;
    import org.apache.http.NameValuePair;
    import org.apache.http.client.HttpClient;
    import org.apache.http.client.entity.UrlEncodedFormEntity;
    import org.apache.http.client.methods.HttpPost;
    import org.apache.http.impl.client.DefaultHttpClient;
    import org.apache.http.message.BasicNameValuePair;
    import org.apache.http.params.BasicHttpParams;
    import org.apache.http.params.HttpConnectionParams;
    import org.apache.http.params.HttpParams;
    import org.json.JSONArray;
    import org.json.JSONObject;
    
    import android.app.Activity;
    import android.app.AlertDialog;
    import android.app.ProgressDialog;
    import android.content.Context;
    import android.content.DialogInterface;
    import android.content.Intent;
    import android.content.res.TypedArray;
    import android.os.AsyncTask;
    import android.os.Bundle;
    import android.os.Handler;
    import android.support.v4.widget.DrawerLayout;
    import android.util.Log;
    import android.view.Menu;
    import android.view.MenuItem;
    import android.view.View;
    import android.widget.AdapterView;
    import android.widget.AdapterView.OnItemClickListener;
    import android.widget.ArrayAdapter;
    import android.widget.ImageView;
    import android.widget.ListView;
    import android.widget.PopupMenu;
    import android.widget.PopupMenu.OnMenuItemClickListener;
    import android.widget.Toast;
    
    
    public class pdf extends Activity
    {
    
        ImageView iv;
        public boolean connect=false,logged=false;
        public String db_select;
        ListView l1;
        AlertDialog alertDialog;
        String mPwd,UName1="Success",UName,ret,receivedName;
        public Iterator itr;
        //private String SERVICE_URL = "http://61.12.7.197:8080/pdf";
        //private String SERVICE_URL1 = "http://61.12.7.197:8080/url";
        //private final String SERVICE_URL = "http://10.54.3.208:8080/Employee/person/pdf";
        //private final String SERVICE_URL1 = "http://10.54.3.208:8080/Employee/person/url";
        private final String SERVICE_URL = Urlmanager.Address+"pdf";
        private final String SERVICE_URL1 = Urlmanager.Address+"url";
        private final String TAG = "Pdf";
        ArrayList todoItems;
        Boolean isInternetPresent = false;
        ConnectionDetector cd;
        ArrayAdapter aa;
        public List list1=new ArrayList();
        public DrawerLayout mDrawerLayout;
        public ListView mDrawerList;
        //public ActionBarDrawerToggle mDrawerToggle;
    
        // NavigationDrawer title "Nasdaq" in this example
        public CharSequence mDrawerTitle;
    
        //  App title "Navigation Drawer" in this example 
        public CharSequence mTitle;
    
        // slider menu items details 
        public String[] navMenuTitles=null;
        public TypedArray navMenuIcons;
    
        public ArrayList navDrawerItems;
        public NavDrawerListAdapter adapter;
    
        @Override
        protected void onCreate(Bundle savedInstanceState) 
        {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.sliding_project);
             iv = (ImageView)findViewById(R.id.imageView2);
            l1 = (ListView)findViewById(R.id.list);
    
    
            mTitle = mDrawerTitle = getTitle();
    
            // getting items of slider from array
            navMenuTitles = getResources().getStringArray(R.array.nav_drawer_items);
    
            // getting Navigation drawer icons from res 
            navMenuIcons = getResources()
                    .obtainTypedArray(R.array.nav_drawer_icons);
    
            mDrawerLayout = (DrawerLayout) findViewById(R.id.drawer_layout);
            mDrawerList = (ListView) findViewById(R.id.list_slidermenu);
    
            navDrawerItems = new ArrayList();
    
    
            // list item in slider at 1 Home Nasdaq details
            navDrawerItems.add(new NavDrawerItem(navMenuTitles[0], navMenuIcons.getResourceId(0, -1)));
            // list item in slider at 2 Facebook details
            navDrawerItems.add(new NavDrawerItem(navMenuTitles[1], navMenuIcons.getResourceId(1, -1)));
            // list item in slider at 3 Google details
            navDrawerItems.add(new NavDrawerItem(navMenuTitles[2], navMenuIcons.getResourceId(2, -1)));
            // list item in slider at 4 Apple details
    
    
            // Recycle array
            navMenuIcons.recycle();
    
            mDrawerList.setOnItemClickListener(new SlideMenuClickListener());
    
            // setting list adapter for Navigation Drawer
            adapter = new NavDrawerListAdapter(getApplicationContext(),
                    navDrawerItems);
            mDrawerList.setAdapter(adapter);
    
            if (savedInstanceState == null) {
                  displayView(0);
            }
    
              iv.setOnClickListener(new View.OnClickListener() {
    
                    @Override
                    public void onClick(View v) {
    
    
                        PopupMenu popup = new PopupMenu(getBaseContext(), v);
    
                        /** Adding menu items to the popumenu */
                        popup.getMenuInflater().inflate(R.menu.main, popup.getMenu());
    
                        popup.setOnMenuItemClickListener(new OnMenuItemClickListener() {
    
                            @Override
                            public boolean onMenuItemClick(MenuItem item) {
    
                                switch (item.getItemId()){
                                case R.id.Home:
                                    Intent a = new Intent(pdf.this,Design_Activity.class);
                                    startActivity(a);
                                    //Projects_Accel.this.finish();
                                //  return true;
                                    break;
                                case R.id.Logout:
                                    /*Intent z = new Intent(this,MainActivity.class);
                                    z.addFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP);
                                    startActivity(z);
                                    this.finish();*/
                                    Intent z = new Intent(pdf.this,MainActivity.class);
                                    z.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP | 
                                            Intent.FLAG_ACTIVITY_CLEAR_TASK |
                                            Intent.FLAG_ACTIVITY_NEW_TASK);
                                    startActivity(z);
                                    pdf.this.finish();
                                //  return true;
                                    break;
                                }
    
                                return true;
                            }
                        });
                            popup.show();
                    }
                });
    
                 todoItems = new ArrayList();
                    aa = new ArrayAdapter(this,R.layout.list_row,R.id.title,todoItems);
                    l1.setAdapter(aa);
                    todoItems.clear();
                    Intent intent = getIntent();
                    receivedName = (String) intent.getSerializableExtra("PROJECT");
                    cd = new ConnectionDetector(getApplicationContext());
                    isInternetPresent = cd.isConnectingToInternet();
                    if(isInternetPresent)
                    {
                    try
                    {
                        validat_user(receivedName);
                        final Handler handler = new Handler();
                        handler.postDelayed( new Runnable() {
    
                            @Override
                            public void run() {
                                todoItems.clear();
                                //alertDialog.cancel();
                                validat_user(receivedName);
                                handler.postDelayed( this, 60 * 1000 );
                            }
                        }, 60 * 1000 );
    
    
                    }
    
                    catch(Exception e)
                    {
                        display("Network error.\nPlease check with your network settings.");
                    }
                    }
                    else
                    {
                        display("No Internet Connection..");
                    }
    
                    l1.setOnItemClickListener(new OnItemClickListener() {
                        public void onItemClick(AdapterView<?> parent, View view,
                            int position, long id) {
    
                         String name=(String)parent.getItemAtPosition(position);
    
                         /*Toast.makeText(getBaseContext(), name, Toast.LENGTH_LONG).show();
                          Intent i = new Intent(getBaseContext(),Webview.class);
                          i.putExtra("USERNAME", name);
                          startActivity(i);*/
                         cd = new ConnectionDetector(getApplicationContext());
                            isInternetPresent = cd.isConnectingToInternet();
                         if(isInternetPresent)
                            {
                         try
                            {
                                validat_user1(receivedName,name);
    
                            }
                            catch(Exception e)
                            {
                                display("Network error.\nPlease check with your network settings.");
    
                            }
    
                            }
                         else
                            {
                                display("No Internet Connection..");
                            }
                        }
                    });
    
                 }      
        private class SlideMenuClickListener implements
        ListView.OnItemClickListener {
    @Override
    public void onItemClick(AdapterView<?> parent, View view, int position,
            long id) {
        // display view for selected item
        displayView(position);
    }
    }
    
    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
    getMenuInflater().inflate(R.menu.main, menu);
    //setMenuBackground();
    return true;
    }
    
    
    /*@Override
    public boolean onOptionsItemSelected(MenuItem item) {
    //  title/icon
    if (mDrawerToggle.onOptionsItemSelected(item)) {
        return true;
    }
    // Handle action bar actions click
    switch (item.getItemId()) {
    case R.id.action_settings:
        return true;
    default:
        return super.onOptionsItemSelected(item);
    }
    }*/
    
    //called when invalidateOptionsMenu() invoke 
    
    @Override
    public boolean onPrepareOptionsMenu(Menu menu) {
    // if Navigation drawer is opened, hide the action items
    //boolean drawerOpen = mDrawerLayout.isDrawerOpen(mDrawerList);
    //menu.findItem(R.id.action_settings).setVisible(!drawerOpen);
    return super.onPrepareOptionsMenu(menu);
    }
    
    private void displayView(int position) {
    // update the main content with called Fragment
    switch (position) {
    
    case 1:
        //fragment = new Fragment2Profile();
        Intent i = new Intent(pdf.this,Design_Activity.class);
        startActivity(i);
        pdf.this.finish();
        break;
    case 2:
        //fragment = new Fragment3Logout();
        Intent z = new Intent(pdf.this,MainActivity.class);
        z.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP | 
                 Intent.FLAG_ACTIVITY_CLEAR_TASK |
                 Intent.FLAG_ACTIVITY_NEW_TASK);
            startActivity(z);
            pdf.this.finish();
        break;
    
    default:
        break;
    }
    
    
    
    }
    
    
    
    
            public void display(String msg) 
            {
                Toast.makeText(pdf.this, msg, Toast.LENGTH_LONG).show();
            }
            private void validat_user(String st)
            {
    
                WebServiceTask wst = new WebServiceTask(WebServiceTask.POST_TASK, this, "");
    
               wst.addNameValuePair1("TABLE_NAME", st);
               // wst.addNameValuePair("Emp_PWD", stg2);
               // db_select=stg1;
                //display("I am");
                wst.execute(new String[] { SERVICE_URL });
                //display(SERVICE_URL);
    
            }
            private void validat_user1(String stg1,String stg2)
            {
                db_select=stg1;
                WebServiceTask wst = new WebServiceTask(WebServiceTask.POST_TASK, this, "Loading...");
    
                wst.addNameValuePair1("PDF_NAME", stg1);
                wst.addNameValuePair1("TABLE_NAME1", stg2);
                wst.execute(new String[] { SERVICE_URL1 });
    
            }
            @SuppressWarnings("deprecation")
            public void no_net()
            {
                display( "No Network Connection");
                final AlertDialog alertDialog = new AlertDialog.Builder(pdf.this).create();
                alertDialog.setTitle("No Internet Connection");
                alertDialog.setMessage("You don't have internet connection.\nElse please check the Internet Connection Settings.");
                //alertDialog.setIcon(R.drawable.error_info);
                alertDialog.setCancelable(false);
                alertDialog.setButton("Close", new DialogInterface.OnClickListener() 
                {
                    public void onClick(DialogInterface dialog, int which)
                    {   
                        alertDialog.cancel();
                        pdf.this.finish();
                        System.exit(0);
                    }
                });
                alertDialog.setButton2("Use Local DataBase", new DialogInterface.OnClickListener() 
                {
                    public void onClick(DialogInterface dialog, int which)
                    {
                        display( "Accessing local DataBase.....");
                        alertDialog.cancel();
                    }
                });
                alertDialog.show();
            }
    
            private class WebServiceTask extends AsyncTask<String, Void, String> {
    
                public static final int POST_TASK = 1;
    
                private static final String TAG = "WebServiceTask";
    
                // connection timeout, in milliseconds (waiting to connect)
                private static final int CONN_TIMEOUT = 12000;
    
                // socket timeout, in milliseconds (waiting for data)
                private static final int SOCKET_TIMEOUT = 12000;
    
                private int taskType = POST_TASK;
                private Context mContext = null;
                private String processMessage = "Processing...";
    
                private ArrayList<NameValuePair> params = new ArrayList<NameValuePair>();
    
                private ProgressDialog pDlg = null;
    
                public WebServiceTask(int taskType, Context mContext, String processMessage) {
    
                    this.taskType = taskType;
                    this.mContext = mContext;
                    this.processMessage = processMessage;
                }
    
                public void addNameValuePair1(String name, String value) {
    
                    params.add(new BasicNameValuePair(name, value));
                }
                @SuppressWarnings("deprecation")
                private void showProgressDialog() {
    
                    pDlg = new ProgressDialog(mContext);
                    pDlg.setMessage(processMessage);
                    pDlg.setProgressDrawable(mContext.getWallpaper());
                    pDlg.setProgressStyle(ProgressDialog.STYLE_SPINNER);
                    pDlg.setCancelable(false);
                    pDlg.show();
    
                }
    
                @Override
                protected void onPreExecute() {
    
                    showProgressDialog();
    
                }
    
                protected String doInBackground(String... urls) {
    
                    String url = urls[0];
                    String result = "";
    
                    HttpResponse response = doResponse(url);
    
                    if (response == null) {
                        return result;
                    } else {
    
                        try {
    
                            result = inputStreamToString(response.getEntity().getContent());
    
                        } catch (IllegalStateException e) {
                            Log.e(TAG, e.getLocalizedMessage(), e);
    
                        } catch (IOException e) {
                            Log.e(TAG, e.getLocalizedMessage(), e);
                        }
    
                    }
    
                    return result;
                }
    
                @Override
                protected void onPostExecute(String response) {
    
                    handleResponse(response);
                    pDlg.dismiss();
    
                }
    
    
                // Establish connection and socket (data retrieval) timeouts
                private HttpParams getHttpParams() {
    
                    HttpParams htpp = new BasicHttpParams();
    
                    HttpConnectionParams.setConnectionTimeout(htpp, CONN_TIMEOUT);
                    HttpConnectionParams.setSoTimeout(htpp, SOCKET_TIMEOUT);
    
                    return htpp;
                }
    
                private HttpResponse doResponse(String url) {
    
                    // Use our connection and data timeouts as parameters for our
                    // DefaultHttpClient
                    HttpClient httpclient = new DefaultHttpClient(getHttpParams());
    
                    HttpResponse response = null;
    
                    try {
                        switch (taskType) {
    
                        case POST_TASK:
                            HttpPost httppost = new HttpPost(url);
                            // Add parameters
                            httppost.setEntity(new UrlEncodedFormEntity(params));
    
                            response = httpclient.execute(httppost);
                            break;
                        }
                    } catch (Exception e) {
                        display("Remote DataBase can not be connected.\nPlease check network connection.");
    
                        Log.e(TAG, e.getLocalizedMessage(), e);
                        return null;
    
                    }
    
                    return response;
                }
    
                private String inputStreamToString(InputStream is) {
    
                    String line = "";
                    StringBuilder total = new StringBuilder();
    
                    // Wrap a BufferedReader around the InputStream
                    BufferedReader rd = new BufferedReader(new InputStreamReader(is));
    
                    try {
                        // Read response until the end
                        while ((line = rd.readLine()) != null) {
                            total.append(line);
                        }
                    } catch (IOException e) {
                        Log.e(TAG, e.getLocalizedMessage(), e);
                    }
    
                    // Return full string
                    return total.toString();
                }
    
            }
            public void handleResponse(String response) 
            {    //display("JSON responce is : "+response);
                if(!response.equals(""))
                {
               try {
    
                    JSONObject jso = new JSONObject(response);
    
    
                          int UName = jso.getInt("status1");
    
                          if(UName==1)
                          {
                                String status = jso.getString("reps1");
                                ret=status.substring(12,status.length()-2);
                                todoItems.add(0, ret);
                                aa.notifyDataSetChanged();
                          }
                          else if(UName==-1)
                          {
                              String status = jso.getString("status");
                              //ret=status.substring(12,status.length()-2);
                              //display(status);
                                Intent intObj=new Intent(pdf.this,Webview.class);
                                 intObj.putExtra("USERNAME",status);
                                startActivity(intObj);
                          }
                          else if(UName>1)
                          {
    //                       int count=Integer.parseInt(UName);
    //                       display("Number of Projects have been handling in AFL right now: "+count);
                            list1=new ArrayList();
    
                            JSONArray array=jso.getJSONArray("reps1");
                            for(int i=0;i parent, View view, int position,
                        long id) {
                    // display view for selected item
                    displayView(position);
                }
            }
    
    
            private void displayView(int position) {
                // update the main content with called Fragment
            //  Fragment fragment = null;
                switch (position) {
                case 0:
                //  fragment = new Fragment1User();
                    break;
                case 1:
                //  fragment = new Fragment2Profile();
                    break;
                case 2:
                //  fragment = new Fragment3Logout();
                    break;
    
                default:
                    break;
                }
            }*/
    
    
    }
    
    qid & accept id: (23129852, 23174570) query: How to use another table fields as a criteria for MS Access soup:

    The 2nd problem is a bit more difficult than the 1st. My approach would be to use 3 separate queries to get the answer:

    \n

    Query1 returns a record for each record in the original table, adding the year and quarter from the quarters table. Note that instead of using the quarters table, you could just as easily calculate the year and quarter from the date.

    \n
    SELECT Table.FName, Table.FValue, Table.VDate, Quarters.Yr, Quarters.Qtr\nFROM [Table], Quarters\nWHERE (((Table.VDate)>=[start] And (Table.VDate)<=[end]));\n
    \n

    Query2 uses the results of Query1 and finds the minimum values you need:

    \n
    SELECT Query1.FName, Query1.Yr, Query1.Qtr, Min(Query1.FValue) AS MinValue\nFROM Query1\nGROUP BY Query1.FName, Query1.Yr, Query1.Qtr;\n
    \n

    Query3 matches the results of Query1 and Query2 to show the date on which the minimum value was reached. Note that I made this a Sum query and used First(VDate), assuming that the minimum value may have occurred more than once and you need only the 1st time it happened.

    \n
    SELECT Query1.FName, Query1.Yr, Query1.Qtr, Query2.MinValue, First(Query1.VDate) AS MidDate, Query1.FValue\nFROM Query1 INNER JOIN Query2 ON (Query1.Qtr = Query2.Qtr) AND (Query1.FValue = Query2.MinValue) AND (Query1.FName = Query2.FName)\nGROUP BY Query1.FName, Query1.Yr, Query1.Qtr, Query2.MinValue, Query1.FValue;\n
    \n

    There's probably a clever way to do this all in one query, but this is the way I usually solve similar problems.

    \n soup wrap:

    The 2nd problem is a bit more difficult than the 1st. My approach would be to use 3 separate queries to get the answer:

    Query1 returns a record for each record in the original table, adding the year and quarter from the quarters table. Note that instead of using the quarters table, you could just as easily calculate the year and quarter from the date.

    SELECT Table.FName, Table.FValue, Table.VDate, Quarters.Yr, Quarters.Qtr
    FROM [Table], Quarters
    WHERE (((Table.VDate)>=[start] And (Table.VDate)<=[end]));
    

    Query2 uses the results of Query1 and finds the minimum values you need:

    SELECT Query1.FName, Query1.Yr, Query1.Qtr, Min(Query1.FValue) AS MinValue
    FROM Query1
    GROUP BY Query1.FName, Query1.Yr, Query1.Qtr;
    

    Query3 matches the results of Query1 and Query2 to show the date on which the minimum value was reached. Note that I made this a Sum query and used First(VDate), assuming that the minimum value may have occurred more than once and you need only the 1st time it happened.

    SELECT Query1.FName, Query1.Yr, Query1.Qtr, Query2.MinValue, First(Query1.VDate) AS MidDate, Query1.FValue
    FROM Query1 INNER JOIN Query2 ON (Query1.Qtr = Query2.Qtr) AND (Query1.FValue = Query2.MinValue) AND (Query1.FName = Query2.FName)
    GROUP BY Query1.FName, Query1.Yr, Query1.Qtr, Query2.MinValue, Query1.FValue;
    

    There's probably a clever way to do this all in one query, but this is the way I usually solve similar problems.
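    For readers outside Access, the same three-step pattern (tag each row with its year and quarter, take the per-group minimum, then join back to recover the first date it occurred) can be exercised directly. The sketch below is illustrative only: SQLite stands in for Access, the table and column names are invented, and the quarter is derived from the date, as the answer notes you can do instead of using a Quarters table.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE t (FName TEXT, FValue REAL, VDate TEXT);
    INSERT INTO t VALUES
      ('a', 5, '2014-01-10'),
      ('a', 3, '2014-02-01'),
      ('a', 3, '2014-03-15'),  -- minimum occurs twice; we want the first date
      ('b', 7, '2014-01-20');
    """)

    rows = conn.execute("""
    WITH q1 AS (                    -- Query1: tag each row with year and quarter
      SELECT FName, FValue, VDate,
             strftime('%Y', VDate) AS Yr,
             (strftime('%m', VDate) + 2) / 3 AS Qtr
      FROM t
    ),
    q2 AS (                         -- Query2: per-group minimum
      SELECT FName, Yr, Qtr, MIN(FValue) AS MinValue
      FROM q1
      GROUP BY FName, Yr, Qtr
    )
    SELECT q1.FName, q1.Yr, q1.Qtr, q2.MinValue,
           MIN(q1.VDate) AS FirstDate   -- Query3: first date the minimum occurred
    FROM q1
    JOIN q2 ON q1.FName = q2.FName AND q1.Yr = q2.Yr
           AND q1.Qtr = q2.Qtr AND q1.FValue = q2.MinValue
    GROUP BY q1.FName, q1.Yr, q1.Qtr, q2.MinValue
    ORDER BY q1.FName
    """).fetchall()
    print(rows)
    ```

    The two CTEs correspond to Query1 and Query2; the outer SELECT is Query3, with MIN(VDate) playing the role of First(VDate).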

    qid & accept id: (23146750, 23147525) query: List records with duplicate values soup:

    If you have a Projects table then you can correct your query as follows:

    \n
    select\n     projectId,\n     IDs = STUFF(\n    (SELECT ','+ CAST(g2.[value] AS VARCHAR(255)) as 'data()' \n      FROM ProjectDetail g2\n      WHERE g2.recordType=1\n            and g1.value=g2.value\n            and g1.recordType=g2.recordType\n            and g1.projectId=g2.projectIdand\n            and g2.auditDate > '01-01-2014'\n      For XML PATH('')\n      ),1,1,'')\nFROM Projects P\nWHERE EXISTS (select projectID\n              from ProjectDetail PD ON P.projectID=PD.ProjectID\n              having count(*)>1)\n
    \n

    Or, without the Projects table:

    \n
       select\n         projectId,\n         IDs = STUFF(\n        (SELECT ','+ CAST(g2.[value] AS VARCHAR(255)) as 'data()' \n          FROM ProjectDetail g2\n          WHERE g2.recordType=1\n                and g1.value=g2.value\n                and g1.recordType=g2.recordType\n                and g1.projectId=g2.projectIdand\n                and g2.auditDate > '01-01-2014'\n          For XML PATH('')\n          ),1,1,'')\n    FROM  (select projectID\n           from ProjectDetail PD\n           having count(*)>1) P\n
    \n soup wrap:

    If you have a Projects table then you can correct your query as follows:

    select
         projectId,
         IDs = STUFF(
        (SELECT ','+ CAST(g2.[value] AS VARCHAR(255)) as 'data()' 
          FROM ProjectDetail g2
          WHERE g2.recordType=1
                and g1.value=g2.value
                and g1.recordType=g2.recordType
                and g1.projectId=g2.projectId
                and g2.auditDate > '01-01-2014'
          For XML PATH('')
          ),1,1,'')
    FROM Projects P
    WHERE EXISTS (select PD.projectID
                  from ProjectDetail PD
                  where PD.projectID = P.projectID
                  group by PD.projectID
                  having count(*) > 1)
    

    Or, without the Projects table:

       select
             projectId,
             IDs = STUFF(
            (SELECT ','+ CAST(g2.[value] AS VARCHAR(255)) as 'data()' 
              FROM ProjectDetail g2
              WHERE g2.recordType=1
                    and g1.value=g2.value
                    and g1.recordType=g2.recordType
                    and g1.projectId=g2.projectId
                    and g2.auditDate > '01-01-2014'
              For XML PATH('')
              ),1,1,'')
        FROM  (select projectID
               from ProjectDetail PD
               group by projectID
               having count(*)>1) P
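    The STUFF(... FOR XML PATH('')) construct is SQL Server's pre-2017 idiom for string aggregation (STRING_AGG from 2017 on). A minimal sketch of the same "roll duplicate rows up into a comma-separated list" idea, using SQLite's group_concat and an invented two-column ProjectDetail table, purely to show the shape of the result:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE ProjectDetail (projectId INTEGER, value TEXT);
    INSERT INTO ProjectDetail VALUES
      (1, 'a'), (1, 'b'), (1, 'd'),
      (2, 'c'), (2, 'e'),
      (3, 'f');                 -- only one row, so it should not appear
    """)

    # group_concat plays the role of STUFF(... FOR XML PATH('')); the HAVING
    # clause keeps only projects with more than one detail row.
    rows = conn.execute("""
    SELECT projectId, group_concat(value, ',') AS IDs
    FROM ProjectDetail
    GROUP BY projectId
    HAVING COUNT(*) > 1
    ORDER BY projectId
    """).fetchall()
    print(rows)
    ```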
    
    qid & accept id: (23151081, 23151380) query: SQL Server: compare two columns in Select and count matches soup:

    It's a bit different, but I would try something like this:

    \n
    SELECT a.col1, a.total_count, b.match_count,\n  (100*b.match_count/a.total_count) AS match_percentage\nFROM (\n  SELECT col1, COUNT(*) AS total_count\n  FROM LogTable\n  WHERE Category LIKE '2014-04%'\n  GROUP BY col1\n) a\nJOIN (\n  SELECT col1, COUNT(*) AS match_count\n  FROM LogTable\n  WHERE Category LIKE '2014-04%' AND col2=col3\n  GROUP BY col1\n) b ON a.col1=b.col1\n
    \n

    As an alternative... this should give the same result. Not sure which would be more efficient:

    \n
    SELECT col1, total_count,\n  (SELECT COUNT(*)\n   FROM LogTable\n   WHERE Category LIKE '2014-04%' AND col1=a.col1 AND col2=col3\n  ) AS match_count,\n  (100*match_count/total_count) AS match_percentage\nFROM (\n  SELECT col1, COUNT(*) AS total_count\n  FROM LogTable\n  WHERE Category LIKE '2014-04%'\n  GROUP BY col1\n) a\n
    \n

    But beware: not all engines are able to reference the subselect column match_count directly in the expression used to build the match_percentage column. SQL Server, for one, does not allow a column alias from the same SELECT list to be reused there, so you would have to repeat the subquery or wrap it in another derived table.

    \n soup wrap:

    It's a bit different, but I would try something like this:

    SELECT a.col1, a.total_count, b.match_count,
      (100*b.match_count/a.total_count) AS match_percentage
    FROM (
      SELECT col1, COUNT(*) AS total_count
      FROM LogTable
      WHERE Category LIKE '2014-04%'
      GROUP BY col1
    ) a
    JOIN (
      SELECT col1, COUNT(*) AS match_count
      FROM LogTable
      WHERE Category LIKE '2014-04%' AND col2=col3
      GROUP BY col1
    ) b ON a.col1=b.col1
    

    As an alternative... this should give the same result. Not sure which would be more efficient:

    SELECT col1, total_count,
      (SELECT COUNT(*)
       FROM LogTable
       WHERE Category LIKE '2014-04%' AND col1=a.col1 AND col2=col3
      ) AS match_count,
      (100*match_count/total_count) AS match_percentage
    FROM (
      SELECT col1, COUNT(*) AS total_count
      FROM LogTable
      WHERE Category LIKE '2014-04%'
      GROUP BY col1
    ) a
    

    But beware: not all engines are able to reference the subselect column match_count directly in the expression used to build the match_percentage column. SQL Server, for one, does not allow a column alias from the same SELECT list to be reused there, so you would have to repeat the subquery or wrap it in another derived table.
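    Here is a runnable sketch of the first (join) form, using SQLite and a made-up LogTable, to show the shape of the result. Two caveats worth noting: with integer counts, 100*match_count/total_count is integer division in most engines, and the inner join silently drops any col1 group with zero matches.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE LogTable (col1 TEXT, col2 TEXT, col3 TEXT, Category TEXT);
    INSERT INTO LogTable VALUES
      ('x', 'a', 'a', '2014-04-01'),
      ('x', 'a', 'b', '2014-04-02'),   -- col2 <> col3: not a match
      ('x', 'c', 'c', '2014-04-03'),
      ('x', 'd', 'd', '2014-04-04'),
      ('y', 'a', 'a', '2014-04-05');
    """)

    rows = conn.execute("""
    SELECT a.col1, a.total_count, b.match_count,
           100 * b.match_count / a.total_count AS match_percentage
    FROM (
      SELECT col1, COUNT(*) AS total_count
      FROM LogTable
      WHERE Category LIKE '2014-04%'
      GROUP BY col1
    ) a
    JOIN (
      SELECT col1, COUNT(*) AS match_count
      FROM LogTable
      WHERE Category LIKE '2014-04%' AND col2 = col3
      GROUP BY col1
    ) b ON a.col1 = b.col1
    ORDER BY a.col1
    """).fetchall()
    print(rows)   # integer division: 3 matches out of 4 comes out as 75, not 75.0
    ```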

    qid & accept id: (23151241, 23151281) query: Create row in table with only auto generated fields - SQL soup:

    The SQL standard and most databases support the DEFAULT VALUES clause for this:

    \n
    INSERT INTO "MIGRATION"."VERSION" DEFAULT VALUES;\n
    \n

    This is supported in

    \n
      \n
    • CUBRID
    • \n
    • Firebird
    • \n
    • H2
    • \n
    • HSQLDB
    • \n
    • Ingres
    • \n
    • PostgreSQL
    • \n
    • SQLite
    • \n
    • SQL Server
    • \n
    • Sybase SQL Anywhere
    • \n
    \n

    If the above is not supported, you can still write this statement as a workaround. In fact, the first is specified by the SQL standard to be equivalent to the second:

    \n
    INSERT INTO "MIGRATION"."VERSION" (ID, VERSION_DATE) VALUES (DEFAULT, DEFAULT);\n
    \n

    This will then also work with:

    \n
      \n
    • Access
    • \n
    • DB2
    • \n
    • MariaDB
    • \n
    • MySQL
    • \n
    • Oracle
    • \n
    \n

    For more details, see this blog post here:

    \n

    http://blog.jooq.org/2014/01/08/lesser-known-sql-features-default-values/

    \n soup wrap:

    The SQL standard and most databases support the DEFAULT VALUES clause for this:

    INSERT INTO "MIGRATION"."VERSION" DEFAULT VALUES;
    

    This is supported in

    • CUBRID
    • Firebird
    • H2
    • HSQLDB
    • Ingres
    • PostgreSQL
    • SQLite
    • SQL Server
    • Sybase SQL Anywhere

    If the above is not supported, you can still write this statement as a workaround. In fact, the first is specified by the SQL standard to be equivalent to the second:

    INSERT INTO "MIGRATION"."VERSION" (ID, VERSION_DATE) VALUES (DEFAULT, DEFAULT);
    

    This will then also work with:

    • Access
    • DB2
    • MariaDB
    • MySQL
    • Oracle

    For more details, see this blog post here:

    http://blog.jooq.org/2014/01/08/lesser-known-sql-features-default-values/
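    SQLite is one of the databases in the first list, so the statement can be exercised directly from Python. The table below is an invented stand-in for MIGRATION.VERSION in which every column is auto-generated:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
    CREATE TABLE version (
      id           INTEGER PRIMARY KEY AUTOINCREMENT,  -- auto-generated
      version_date TEXT DEFAULT CURRENT_TIMESTAMP      -- auto-generated
    )
    """)

    # No column list, no VALUES clause: every column falls back to its default.
    conn.execute("INSERT INTO version DEFAULT VALUES")
    conn.execute("INSERT INTO version DEFAULT VALUES")

    rows = conn.execute("SELECT id, version_date FROM version ORDER BY id").fetchall()
    print(rows)
    ```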

    qid & accept id: (23166266, 23195715) query: Procedure to insert data from one column into two columns in another table soup:

    Building a Looping PL/SQL Based DML Cursor For Multiple DML Targets

    \n

    A PL/SQL Stored Procedure is a great way to accomplish your task. An alternate approach to breaking down your single name field into FIRST NAME and LAST NAME components could be to use an Oracle Regular Expression, as in:

    \n
    SELECT REGEXP_SUBSTR('MYFIRST MYLAST','[^ ]+', 1, 1) from dual\n-- Result: MYFIRST\n\nSELECT REGEXP_SUBSTR('MYFIRST MYLAST','[^ ]+', 1, 2) from dual\n-- Result: MYLAST\n
    \n

    A procedure-based approach is a good idea; first wrap this query into a cursor definition, then integrate the cursor within a complete PL/SQL stored procedure DDL script.

    \n
    CREATE or REPLACE PROCEDURE PROC_MYNAME_IMPORT IS\n\n    -- Queries parsed name values from STAFF (the source) table \n\n    CURSOR name_cursor IS\n       SELECT REGEXP_SUBSTR(staff.name,...) as FirstName,\n              REGEXP_SUBSTR(... ) as LastName\n         FROM STAFF;\n\n    BEGIN\n\n       FOR i IN name_cursor LOOP\n\n          --DML Command 1:\n          INSERT INTO Table_One ( first_name, last_name )\n          VALUES (i.FirstName, i.LastName);\n          COMMIT;\n\n          --DML Command 2:\n          INSERT INTO Table_Two ...\n          COMMIT;\n\n          END LOOP;\n\n    END proc_myname_import;\n
    \n

    As you can see from the example block, a long series of DML statements (not just two) can take place for a given cursor record and its values as each loop iteration handles it. Each field may be referenced by the name assigned to it within the cursor SQL statement, using '.' (dot) notation with the handle assigned to the cursor loop as the prefix, as in:

    \n
    CURSOR c1 IS\n   SELECT st.col1, st.col2, st.col3\n     FROM sample_table st\n    WHERE ...\n
    \n

    Then the cursor call for looping through the main record set:

    \n
    FOR my_personal_loop IN c1 LOOP\n    ...do this\n    ...do that\n\n    INSERT INTO some_other_table (column_one, column_two, column_three)\n    VALUES (my_personal_loop.col1, my_personal_loop.col2, ...);\n\n    COMMIT;\nEND LOOP;\n\n... and so on.\n
    \n soup wrap:

    Building a Looping PL/SQL Based DML Cursor For Multiple DML Targets

    A PL/SQL Stored Procedure is a great way to accomplish your task. An alternate approach to breaking down your single name field into FIRST NAME and LAST NAME components could be to use an Oracle Regular Expression, as in:

    SELECT REGEXP_SUBSTR('MYFIRST MYLAST','[^ ]+', 1, 1) from dual
    -- Result: MYFIRST
    
    SELECT REGEXP_SUBSTR('MYFIRST MYLAST','[^ ]+', 1, 2) from dual
    -- Result: MYLAST
    

    A procedure-based approach is a good idea; first wrap this query into a cursor definition, then integrate the cursor within a complete PL/SQL stored procedure DDL script.

    CREATE or REPLACE PROCEDURE PROC_MYNAME_IMPORT IS
    
        -- Queries parsed name values from STAFF (the source) table 
    
        CURSOR name_cursor IS
           SELECT REGEXP_SUBSTR(staff.name,...) as FirstName,
                  REGEXP_SUBSTR(... ) as LastName
             FROM STAFF;
    
        BEGIN
    
           FOR i IN name_cursor LOOP
    
              --DML Command 1:
              INSERT INTO Table_One ( first_name, last_name )
              VALUES (i.FirstName, i.LastName);
              COMMIT;
    
              --DML Command 2:
              INSERT INTO Table_Two ...
              COMMIT;
    
              END LOOP;
    
        END proc_myname_import;
    

    As you can see from the example block, a long series of DML statements (not just two) can take place for a given cursor record and its values as each loop iteration handles it. Each field may be referenced by the name assigned to it within the cursor SQL statement, using '.' (dot) notation with the handle assigned to the cursor loop as the prefix, as in:

    CURSOR c1 IS
       SELECT st.col1, st.col2, st.col3
         FROM sample_table st
        WHERE ...
    

    Then the cursor call for looping through the main record set:

    FOR my_personal_loop IN c1 LOOP
        ...do this
        ...do that
    
        INSERT INTO some_other_table (column_one, column_two, column_three)
        VALUES (my_personal_loop.col1, my_personal_loop.col2, ...);
    
        COMMIT;
    END LOOP;
    
    ... and so on.
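    The same fetch-split-insert loop can be sketched outside PL/SQL. Below is a Python/SQLite version for illustration only - the staff, table_one and table_two names mirror the hypothetical tables above, and a Python regex does the job of REGEXP_SUBSTR(name, '[^ ]+', 1, n):

    ```python
    import re
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE staff (name TEXT);
    INSERT INTO staff VALUES ('MYFIRST MYLAST'), ('ADA LOVELACE');
    CREATE TABLE table_one (first_name TEXT, last_name TEXT);
    CREATE TABLE table_two (full_name TEXT);
    """)

    # The "cursor loop": fetch each source row, split the name on whitespace,
    # then run one or more DML statements per row.
    for (name,) in conn.execute("SELECT name FROM staff").fetchall():
        first, last = re.findall(r"[^ ]+", name)[:2]
        conn.execute("INSERT INTO table_one VALUES (?, ?)", (first, last))
        conn.execute("INSERT INTO table_two VALUES (?)", (name,))
    conn.commit()   # one commit after the loop, rather than per row

    print(conn.execute("SELECT * FROM table_one ORDER BY first_name").fetchall())
    ```

    Note the single commit after the loop: committing once per batch is usually preferable to the per-row COMMIT shown in the PL/SQL sketch, which makes the import non-atomic if it fails partway through.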
    
    qid & accept id: (23176321, 23178314) query: "Convert" Entity Framework program to raw SQL soup:

    I was there, and the good news is you don't have to give up Entity Framework if you don't want to. The bad news is you have to update the database yourself, which isn't as hard as it seems. I'm currently using EF 5 but plan to go to EF 6, and I don't see why this wouldn't still work for EF 6.

    \n

    The first thing is, in the constructor of the DbContext, cast it to IObjectContextAdapter to get access to the ObjectContext. I make a property for this:

    \n
    public virtual ObjectContext ObjContext\n{\n    get\n    {\n        return ((IObjectContextAdapter)this).ObjectContext;\n    }\n}\n
    \n

    Once you have that, subscribe to the SavingChanges event. This isn't our exact code - some things are copied out of other methods and redone - but it gives you an idea of what you need to do.

    \n
    ObjContext.SavingChanges += SaveData;\n\nprivate void SaveData(object sender, EventArgs e)\n{\n    var context = sender as ObjectContext;\n    if (context != null)\n    {\n        context.DetectChanges();\n        var tsql = new StringBuilder();\n        var dbParams = new List>();\n\n        var deletedEntites = context.ObjectStateManager.GetObjectStateEntries(EntityState.Deleted);\n        foreach (var delete in deletedEntites)\n        {\n            // Set state to unchanged - so entity framework will ignore\n            delete.ChangeState(EntityState.Unchanged);\n            // Method to generate tsql for deleting entities\n            DeleteData(delete, tsql, dbParams);\n        }\n\n        var addedEntites = context.ObjectStateManager.GetObjectStateEntries(EntityState.Added);\n        foreach (var add in addedEntites)\n        {\n            // Set state to unchanged - so entity framework will ignore\n            add.ChangeState(EntityState.Unchanged);\n            // Method to generate tsql for added entities\n            AddData(add, tsql, dbParams);\n        }\n\n        var editedEntites = context.ObjectStateManager.GetObjectStateEntries(EntityState.Modified);\n        foreach (var edit in editedEntites)\n        {\n            // Method to generate tsql for updating entities\n            UpdateEditData(edit, tsql, dbParams);\n            // Set state to unchanged - so entity framework will ignore\n            edit.ChangeState(EntityState.Unchanged);\n        }\n        if (!tsql.ToString().IsEmpty())\n        {\n            var dbcommand = Database.Connection.CreateCommand();\n            dbcommand.CommandText = tsql.ToString();\n\n            foreach (var dbParameter in dbParams)\n            {\n                var dbparam = dbcommand.CreateParameter();\n                dbparam.ParameterName = dbParameter.Key;\n                dbparam.Value = dbParameter.Value;\n                dbcommand.Parameters.Add(dbparam);\n            }\n            var 
results = dbcommand.ExecuteNonQuery();\n        }\n    }\n}\n
    \n

    Why do we set the entity to unchanged only after generating the update? Because you can do

    \n
    var changedProperties = edit.GetModifiedProperties();\n
    \n

    to get a list of all the changed properties. Since all the entities are now marked as unchanged, EF will not send any updates to SQL.

    \n

    You will also need to mess with the metadata to go from entity to table and from property to field. This isn't that hard to do, but messing with the metadata does take some time to learn - something I still struggle with sometimes. I refactored all that out into an IMetaDataHelper interface: I pass it the entity type and property name and get the table and field back, caching the result so I don't have to query the metadata all the time.

    \n

    At the end, the tsql batch has all the T-SQL the way we want it, with the locking hints and the transaction level we want. We also change numeric field updates in the T-SQL from absolute assignments (nfield = 10) to relative ones (nfield = nfield + 2) when the user changed the value by 2, to avoid the concurrency issue as well.

    \n

    What you won't get is having the row locked in SQL once someone starts to edit your entity, but I don't see how you would get that with stored procedures either.

    \n

    All in all it took me about 2 solid days to get this all up and running for us.

    \n soup wrap:

    I was there, and the good news is you don't have to give up Entity Framework if you don't want to. The bad news is you have to update the database yourself, which isn't as hard as it seems. I'm currently using EF 5 but plan to go to EF 6, and I don't see why this wouldn't still work for EF 6.

    The first thing is, in the constructor of the DbContext, cast it to IObjectContextAdapter to get access to the ObjectContext. I make a property for this:

    public virtual ObjectContext ObjContext
    {
        get
        {
            return ((IObjectContextAdapter)this).ObjectContext;
        }
    }
    

    Once you have that, subscribe to the SavingChanges event. This isn't our exact code - some things are copied out of other methods and redone - but it gives you an idea of what you need to do.

    ObjContext.SavingChanges += SaveData;
    
    private void SaveData(object sender, EventArgs e)
    {
        var context = sender as ObjectContext;
        if (context != null)
        {
            context.DetectChanges();
            var tsql = new StringBuilder();
            // KeyValuePair<string, object> reconstructed here - the generic
            // arguments were stripped from the original post
            var dbParams = new List<KeyValuePair<string, object>>();

            var deletedEntities = context.ObjectStateManager.GetObjectStateEntries(EntityState.Deleted);
            foreach (var delete in deletedEntities)
            {
                // Set state to unchanged - so entity framework will ignore it
                delete.ChangeState(EntityState.Unchanged);
                // Method to generate tsql for deleting entities
                DeleteData(delete, tsql, dbParams);
            }

            var addedEntities = context.ObjectStateManager.GetObjectStateEntries(EntityState.Added);
            foreach (var add in addedEntities)
            {
                // Set state to unchanged - so entity framework will ignore it
                add.ChangeState(EntityState.Unchanged);
                // Method to generate tsql for added entities
                AddData(add, tsql, dbParams);
            }

            var editedEntities = context.ObjectStateManager.GetObjectStateEntries(EntityState.Modified);
            foreach (var edit in editedEntities)
            {
                // Method to generate tsql for updating entities - must run
                // while the entity is still in the Modified state
                UpdateEditData(edit, tsql, dbParams);
                // Set state to unchanged - so entity framework will ignore it
                edit.ChangeState(EntityState.Unchanged);
            }
            if (tsql.Length > 0)
            {
                var dbcommand = Database.Connection.CreateCommand();
                dbcommand.CommandText = tsql.ToString();

                foreach (var dbParameter in dbParams)
                {
                    var dbparam = dbcommand.CreateParameter();
                    dbparam.ParameterName = dbParameter.Key;
                    dbparam.Value = dbParameter.Value;
                    dbcommand.Parameters.Add(dbparam);
                }
                var results = dbcommand.ExecuteNonQuery();
            }
        }
    }
    

    Why do we set the edited entities to Unchanged only after generating the update? Because while an entity is still marked as Modified you can do

    var changedProperties = edit.GetModifiedProperties();
    

    to get a list of all the changed properties. Since all the entities are now marked as unchanged EF will not send any updates to SQL.

    You will also need to mess with the metadata to go from entity to table and from property to field. This isn't that hard to do, but messing with the metadata does take some time to learn - something I still struggle with sometimes. I refactored all that out into an IMetaDataHelper interface where I pass in the entity type and property name to get the table and field back, along with caching the result so I don't have to query the metadata all the time.

    At the end, tsql is a batch that has all the T-SQL the way we want it, with the locking hints and the transaction level we need. We also change updates to numeric fields from absolute assignments (nfield = 10) to relative ones (nfield = nfield + 2) when the user increased the value by 2, which avoids a concurrency issue as well.

    What you won't get is having the row locked in SQL Server as soon as someone starts to edit your entity, but I don't see how you would get that with stored procedures either.

    All in all it took me about 2 solid days to get this all up and running for us.

    qid & accept id: (23219081, 23221322) query: SQL code for DB query by date soup:

    soup wrap:

    If you will only have one value per tag per month, you can use a conditional aggregate to choose your record. I have gone for the MAX function, but if you only have one value it is arbitrary:

    DECLARE @Year INT;
    SET @Year = 2013;
    
    -- CONVERT TO A DATE TO ALLOW A SARGEABLE PREDICATE IN THE WHERE CLAUSE
    DECLARE @Date SMALLDATETIME;
    SET @Date = CONVERT(SMALLDATETIME, CONVERT(CHAR(4), @Year) + '0101', 112);
    
    SELECT  Tagname,
            Jan = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 1 THEN value END),
            Feb = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 2 THEN value END),
            Mar = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 3 THEN value END),
            Apr = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 4 THEN value END),
            May = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 5 THEN value END),
            Jun = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 6 THEN value END),
            Jul = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 7 THEN value END),
            Aug = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 8 THEN value END),
            Sep = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 9 THEN value END),
            Oct = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 10 THEN value END),
            Nov = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 11 THEN value END),
            Dec = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 12 THEN value END)
    FROM    runtime.dbo.History
    WHERE   Tagname IN ('Tag1', 'Tag2')
    AND     wwVersion = 'Latest'
    AND     DateTime >= @Date
    AND     DateTime < DATEADD(YEAR, 1, @Date)
    GROUP BY TagName;
    
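As a quick check, the conditional-aggregate pivot behaves like this minimal sqlite3 sketch (table and column names here are shortened, invented stand-ins for the answer's runtime.dbo.History, and only two month columns are shown):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE history (tagname TEXT, dt TEXT, value REAL)")
conn.executemany("INSERT INTO history VALUES (?, ?, ?)", [
    ("Tag1", "2013-01-15", 10.0),
    ("Tag1", "2013-02-15", 20.0),
    ("Tag2", "2013-01-20", 5.0),
])
# One row per tag; each month's value lands in its own column via CASE,
# and MAX collapses the per-month group to that single value
rows = conn.execute("""
    SELECT tagname,
           MAX(CASE WHEN strftime('%m', dt) = '01' THEN value END) AS Jan,
           MAX(CASE WHEN strftime('%m', dt) = '02' THEN value END) AS Feb
    FROM history
    GROUP BY tagname
    ORDER BY tagname
""").fetchall()
print(rows)  # [('Tag1', 10.0, 20.0), ('Tag2', 5.0, None)]
```

A month with no reading comes back as NULL, since the CASE has no ELSE branch.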

    If you will have multiple values then you will need to apply some sort of logic to choose the correct one. In the example below I have gone for the first value for each month:

    DECLARE @Year INT;
    SET @Year = 2013;
    
    -- CONVERT TO A DATE TO ALLOW A SARGEABLE PREDICATE IN THE WHERE CLAUSE
    DECLARE @Date SMALLDATETIME;
    SET @Date = CONVERT(SMALLDATETIME, CONVERT(CHAR(4), @Year) + '0101', 112);
    
    SELECT  Tagname,
            Jan = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 1 THEN value END),
            Feb = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 2 THEN value END),
            Mar = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 3 THEN value END),
            Apr = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 4 THEN value END),
            May = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 5 THEN value END),
            Jun = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 6 THEN value END),
            Jul = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 7 THEN value END),
            Aug = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 8 THEN value END),
            Sep = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 9 THEN value END),
            Oct = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 10 THEN value END),
            Nov = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 11 THEN value END),
            Dec = MAX(CASE WHEN DATEPART(MONTH, DateTime) = 12 THEN value END)
    FROM    (   SELECT  TagName, 
                        DateTime,
                        Value,
                        RowNum = ROW_NUMBER() OVER(PARTITION BY TagName, DATEPART(MONTH, DateTime), DATEPART(YEAR, DateTime)
                                                    ORDER BY DateTime)
                FROM    runtime.dbo.History
                WHERE   Tagname IN ('Tag1', 'Tag2')
                AND     wwVersion = 'Latest'
                AND     DateTime >= @Date
                AND     DateTime < DATEADD(YEAR, 1, @Date)
            ) h
    WHERE   h.RowNum = 1
    GROUP BY TagName;
    
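The ROW_NUMBER "first value per month" dedup can be sketched the same way in sqlite3 (requires a build with window-function support, SQLite 3.25+; names are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE history (tagname TEXT, dt TEXT, value REAL)")
conn.executemany("INSERT INTO history VALUES (?, ?, ?)", [
    ("Tag1", "2013-01-05", 10.0),   # first January reading -> kept
    ("Tag1", "2013-01-20", 99.0),   # later January reading -> dropped
    ("Tag1", "2013-02-01", 20.0),
])
# Number rows per (tag, month) by date, then keep only the first of each group
rows = conn.execute("""
    SELECT tagname, dt, value FROM (
        SELECT tagname, dt, value,
               ROW_NUMBER() OVER (PARTITION BY tagname, strftime('%Y-%m', dt)
                                  ORDER BY dt) AS rownum
        FROM history
    ) WHERE rownum = 1 ORDER BY dt
""").fetchall()
print(rows)  # [('Tag1', '2013-01-05', 10.0), ('Tag1', '2013-02-01', 20.0)]
```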
    qid & accept id: (23225721, 23226382) query: Remove last character in dbms_output.put_line soup:

    soup wrap:

    You can't directly - you have no control over what has already been written to the buffer. So you need to not write the trailing comma in the first place. One way is to keep track of where you are in the output - the list of columns in this case - and only add the comma if you are not on the last item. The analytic row_number() function can be used for this:

    begin
      for v_rec in (
        select column_name,data_type,
          row_number() over (order by column_id desc) as rn
        from user_tab_cols
        where table_name = 'RFI_ATCH_CHKLST_DTL'
        order by column_id
      ) loop
        dbms_output.put('p' || v_rec.column_name);
        if v_rec.rn != 1 then
          dbms_output.put(',');
        end if;
        dbms_output.new_line;
      end loop;
    end;
    /
    
    pRACD_REMARKS,
    pRACD_NA_STS,
    pRACD_VAL2_STS,
    pRACD_VAL_STS,
    pBCLI_CODE,
    pBAI_CODE,
    pRAH_ID,
    pRACD_ID
    

    The rn column is a numeric row counter, in descending order in this case. This is the reverse of the order the columns actually appear in - both order by clauses use the same value, column_id, one descending and the other ascending:

    select column_id, column_name,
      row_number() over (order by column_id desc) as rn
    from user_tab_cols
    where table_name = 'RFI_ATCH_CHKLST_DTL'
    order by column_id;
    
     COLUMN_ID COLUMN_NAME                            RN
    ---------- ------------------------------ ----------
             1 RACD_REMARKS                            8 
             2 RACD_NA_STS                             7 
             3 RACD_VAL2_STS                           6 
             4 RACD_VAL_STS                            5 
             5 BCLI_CODE                               4 
             6 BAI_CODE                                3 
             7 RAH_ID                                  2 
             8 RACD_ID                                 1 
    

    So when the row counter goes down to 1, you know you're on the last row from the cursor, and you can use that knowledge to omit the comma.

    You don't have to use column_id but it's probably useful here. You could order by column_name, or anything you like, as long as both clauses use the same ordering logic (but in reverse).
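The trick is language-independent: number the items in reverse so the last one gets counter 1, then skip the separator for it. A plain Python sketch (column names copied from the output above):

```python
# Reverse counter: first item gets len(columns), last item gets 1
columns = ["RACD_REMARKS", "RACD_NA_STS", "RAH_ID", "RACD_ID"]
out = []
for rn, name in zip(range(len(columns), 0, -1), columns):
    line = "p" + name
    if rn != 1:          # not the last row: append the comma
        line += ","
    out.append(line)
print("\n".join(out))
```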

    qid & accept id: (23303779, 23303991) query: update a table from another table and add new values soup:

    soup wrap:

    You can use the MERGE statement to do this UPSERT operation in one statement, but there are issues with MERGE, so I would split it into two statements: an UPDATE and an INSERT.

    UPDATE

    UPDATE O
    SET O.Initials  = N.Initials  
    FROM Original_Table O INNER JOIN New_Table N 
    ON O.ID = N.ID
    

    INSERT

    INSERT INTO Original_Table (ID, Initials)
    SELECT N.ID, N.Initials
    FROM New_Table N
    WHERE NOT EXISTS ( SELECT 1
                       FROM Original_Table O
                       WHERE O.ID = N.ID )
    
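The two-statement upsert can be tried end to end in sqlite3 (table names are lowercase stand-ins; sqlite3 lacks the T-SQL UPDATE ... FROM join, so a correlated subquery stands in for it):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE original_table (id INTEGER PRIMARY KEY, initials TEXT);
    CREATE TABLE new_table (id INTEGER, initials TEXT);
    INSERT INTO original_table VALUES (1, 'AA'), (2, 'BB');
    INSERT INTO new_table VALUES (2, 'XX'), (3, 'CC');
""")
# Step 1 - UPDATE: overwrite initials for ids that already exist
conn.execute("""
    UPDATE original_table
    SET initials = (SELECT n.initials FROM new_table n
                    WHERE n.id = original_table.id)
    WHERE id IN (SELECT id FROM new_table)
""")
# Step 2 - INSERT: add only the ids that are not present yet
conn.execute("""
    INSERT INTO original_table (id, initials)
    SELECT n.id, n.initials FROM new_table n
    WHERE NOT EXISTS (SELECT 1 FROM original_table o WHERE o.id = n.id)
""")
final = conn.execute("SELECT id, initials FROM original_table ORDER BY id").fetchall()
print(final)  # [(1, 'AA'), (2, 'XX'), (3, 'CC')]
```

Row 2 is updated in place and row 3 is inserted; row 1 is untouched.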

    Important Note

    For the reasons why I suggest avoiding the MERGE statement, read the article Use Caution with SQL Server's MERGE Statement by Aaron Bertrand.

    qid & accept id: (23349694, 23349762) query: MySQL query to find partial duplicates soup:
    soup wrap:
    SELECT  first_name,last_name,school,contest FROM table 
    WHERE contest IN ('blah','mah','wah')
    GROUP BY  first_name, last_name, school 
    HAVING COUNT(DISTINCT contest)>1
    

    Edit

    SELECT * FROM table t JOIN
    (SELECT  GROUP_CONCAT(id)as ids,first_name,last_name,school,contest FROM table
    WHERE contest IN (1001,1002,1003)
    GROUP BY  first_name, last_name, school 
    HAVING COUNT(DISTINCT contest)>1)x
    ON FIND_IN_SET(t.id,x.ids)>0
    
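A minimal sqlite3 sketch of the GROUP_CONCAT approach, with invented sample data (note GROUP_CONCAT does not guarantee an ordering, so treat the id list as a set):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE entries
    (id INTEGER, first_name TEXT, last_name TEXT, school TEXT, contest INTEGER)""")
conn.executemany("INSERT INTO entries VALUES (?,?,?,?,?)", [
    (1, 'Ann', 'Lee', 'North', 1001),
    (2, 'Ann', 'Lee', 'North', 1002),  # same person entered a second contest
    (3, 'Bob', 'Kim', 'South', 1001),
])
# Groups with more than one distinct contest are the partial duplicates
rows = conn.execute("""
    SELECT GROUP_CONCAT(id) AS ids, first_name, last_name, school
    FROM entries
    WHERE contest IN (1001, 1002, 1003)
    GROUP BY first_name, last_name, school
    HAVING COUNT(DISTINCT contest) > 1
""").fetchall()
print(rows)  # only Ann Lee appears, with both of her row ids
```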

    FIDDLE

    qid & accept id: (23369574, 23370310) query: How to replace ' or any special character in when using XMLELEMENT Oracle soup:

    soup wrap:

    You can make use of utl_i18n package and unescape_reference() function in particular. Here is an example:

    clear screen;
    column res format a7;
    
    select utl_i18n.unescape_reference(
              rtrim(
                   xmlagg( -- use of xmlagg() function in 
                           -- this situation seems to be unnecessary 
                           XMLELEMENT(E,'I''m'||':')
                          ).extract('//text()'),':'
                    )
            ) as res
     from dual;
    

    Result:

    RES   
    -------
    I'm  
    
    qid & accept id: (23400658, 23400704) query: SQL - ALL, Including all values soup:

    soup wrap:

    I think what you are after is an inner join. I'm not sure from your question which way around you want your data, but this should give you a good clue how to proceed and which keywords to look for in the documentation to go further.

    SELECT a.*
    FROM xyz a
    INNER JOIN abc b ON b.account_number = a.account_number;
    

    EDIT:

    Seems I misunderstood the original question - sorry. To get what you want you can just do:

    SELECT  campaign_id
    FROM    xyz 
    WHERE   account_number IN ('1', '2', '3', '5')
    GROUP BY campaign_id
    HAVING  COUNT(DISTINCT account_number) = 4;
    

    This is called relational division if you want to investigate further.
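Relational division is easy to see with a tiny sqlite3 example (the campaign data is invented): only a campaign that covers all four listed accounts survives the HAVING clause.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE xyz (campaign_id INTEGER, account_number TEXT)")
conn.executemany("INSERT INTO xyz VALUES (?, ?)", [
    (10, '1'), (10, '2'), (10, '3'), (10, '5'),   # has all four accounts
    (20, '1'), (20, '2'),                          # missing two accounts
])
# COUNT(DISTINCT ...) = 4 keeps only campaigns covering every account
rows = conn.execute("""
    SELECT campaign_id FROM xyz
    WHERE account_number IN ('1', '2', '3', '5')
    GROUP BY campaign_id
    HAVING COUNT(DISTINCT account_number) = 4
""").fetchall()
print(rows)  # [(10,)]
```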

    qid & accept id: (23433143, 23433168) query: How to select records that have multiple values in sql? soup:

    soup wrap:

    To return all the subscription plan IDs in one row, use GROUP_CONCAT:

    SELECT user_id, GROUP_CONCAT(DISTINCT subscription_plan_id), MIN(created_at), MAX(created_at)
    FROM
      subscriptions
    WHERE 
      created_at BETWEEN '2014-01-01' AND '2014-01-31'
    GROUP BY
      user_id
    HAVING
      COUNT(DISTINCT subscription_plan_id) > 1
    

    To return them in multiple rows:

    SELECT DISTINCT user_id, subscription_plan_id, created_at
    FROM subscriptions s
    WHERE user_id IN (
        SELECT user_id
        FROM subscriptions
        WHERE 
          created_at BETWEEN '2014-01-01' AND '2014-01-31'
        GROUP BY
          user_id
        HAVING
          COUNT(DISTINCT subscription_plan_id) > 1)
    AND created_at BETWEEN '2014-01-01' AND '2014-01-31'
    ORDER BY user_id, created_at
    
    qid & accept id: (23470309, 23474733) query: sql select query self join or loop through to fetch records soup:

    soup wrap:

    This is a recursive query: For all rooms go to the connecting room till you find the one that has no more connecting room (i.e. connecting room id is 0).

    with rooms (roomid, connectingroomid) as 
    (
      select 
        roomid,
        case when connectingroomid = 0 then 
          roomid 
        else 
          connectingroomid 
        end as connectingroomid
      from room
      where connectingroomid = 0
      union all
      select room.roomid, rooms.connectingroomid 
      from room
      inner join rooms on room.connectingroomid = rooms.roomid
    ) 
    select * from rooms
    order by connectingroomid, roomid;
    

    Here is the SQL fiddle: http://www.sqlfiddle.com/#!3/46ed0/1.

    EDIT: Here is the explanation. Rather than doing this in the comments I am doing it here for better readability.

    The WITH clause is used to create the recursion here. You see I named it rooms, and inside rooms I select from rooms itself. Here is how to read it: start with the part before UNION ALL, then recursively do the part after UNION ALL. So, before UNION ALL I only select the records where connectingroomid is zero. In your example you show every room with its connectingroomid, except for the rooms with connectingroomid 0, which you show paired with themselves. I use CASE here to do the same. But now that I am explaining this, I notice that connectingroomid is always zero there because of the WHERE clause, so the statement can be simplified:

    with rooms (roomid, connectingroomid) as 
    (
      select 
        roomid,
        roomid as connectingroomid
      from room where connectingroomid = 0
      union all
      select room.roomid, rooms.connectingroomid 
      from room
      inner join rooms on room.connectingroomid = rooms.roomid
    ) 
    select * from rooms
    order by connectingroomid, roomid;
    

    The SQL fiddle: http://www.sqlfiddle.com/#!3/46ed0/2.

    With the part before the UNION ALL I found the two rooms without a connecting room. The part after UNION ALL is then executed for those two rooms: it selects the rooms whose connecting room was just found, then the rooms whose connecting room was found in that step, and so on till the join returns no more rooms.
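The same recursion runs unchanged in sqlite3 (sample rooms invented: 2 connects to 1, 3 to 2, 5 to 4; rooms 1 and 4 have no connecting room):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE room (roomid INTEGER, connectingroomid INTEGER)")
conn.executemany("INSERT INTO room VALUES (?, ?)",
                 [(1, 0), (2, 1), (3, 2), (4, 0), (5, 4)])
# Anchor: rooms with no connecting room, paired with themselves.
# Recursive step: pull in every room that connects to a room already found.
rows = conn.execute("""
    WITH RECURSIVE rooms(roomid, connectingroomid) AS (
        SELECT roomid, roomid FROM room WHERE connectingroomid = 0
        UNION ALL
        SELECT room.roomid, rooms.connectingroomid
        FROM room JOIN rooms ON room.connectingroomid = rooms.roomid
    )
    SELECT * FROM rooms ORDER BY connectingroomid, roomid
""").fetchall()
print(rows)  # [(1, 1), (2, 1), (3, 1), (4, 4), (5, 4)]
```

Every room ends up labelled with the root of its chain.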

    Hope this helps in understanding the query. You can search for "recursive cte" on the Internet to find more examples and explanations on the topic.

    qid & accept id: (23478919, 23479108) query: Referencing table in another database soup:

    soup wrap:

    Yes, the kind of reference you describe is called a table synonym in SQL Server.

    USE DBS
    GO
    
    CREATE SYNONYM [dbo].[secondaryTableReference] FOR [DBS].[dbo].[secondaryTable]
    GO
    

    Then you may query it as though it is a table in your primary database.

    SELECT * FROM [dbo].[secondaryTableReference]
    
    qid & accept id: (23507472, 23507574) query: SUM of columns and displaying multiple queries soup:

    soup wrap:

    Try using:

    $result = mysql_query("SELECT productLine, SUM(buyPrice) AS sum_buy_price, SUM(MSRP) AS sum_msrp FROM myTable group by productLine"); // selecting data through mysql_query()
    

    and to output the results:

    echo "<table border='1'>";
    echo "<tr><th>productLine</th><th>sum_buy_price</th><th>sum_msrp</th></tr>";
    while($row = mysql_fetch_array($result))
    {
        // we are running a while loop to print all the rows in a table
        echo "<tr>";
        echo "<td>" . $row['productLine'] . "</td>";
        echo "<td>" . $row['sum_buy_price'] . "</td>";
        echo "<td>" . $row['sum_msrp'] . "</td>";
        echo "</tr>";
    }
    echo "</table>";
    qid & accept id: (23527871, 23531944) query: change SQL column from Float to Decimal Type soup:

    soup wrap:

    You can simply update the Rate data and then change the column data type.

    First, you can verify the CAST by using the following query (for only rows that have the decimal part < 0.000001)

    SELECT 
      [Rate],
      CAST([Rate] as decimal(28, 6)) Rate_decimal
    FROM [dbo].[TES_Tracks]
    WHERE [Rate] - FLOOR([Rate]) < 0.000001;
    

    Once you have verified that the CAST expression is correct, you can apply it using an UPDATE statement. Again, you can update only those rows which have [Rate] - FLOOR([Rate]) < 0.000001, thus getting good performance.

    UPDATE [dbo].[TES_Tracks]
    SET [Rate] = CAST([Rate] as decimal(28, 6))
    WHERE [Rate] - FLOOR([Rate]) < 0.000001;
    
    ALTER TABLE [dbo].[TES_Tracks] ALTER COLUMN [Rate] DECIMAL(28,6);
    

    This way, you would not need to drop the Rate column.

    SQL Fiddle demo

    qid & accept id: (23552848, 23552906) query: Can I add aggregated column without performing a join? soup:

    soup wrap:

    Depending on what your function is, you can use window functions (sometimes called analytic functions). For instance, if you wanted the maximum value of b for a given a:

    select a, b, c, max(b) over (partition by a) as d
    from table1;
    

    Without more information, it is hard to be more specific.

    EDIT:

    You should be able to do this with analytic functions:

    select count, avg, variance,
           (sum(count * avg) over (partition by b) /
            sum(count) over (partition by b)
           ) as weighted_average
    from view_1;
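    The first query above can be run as-is against SQLite (3.25+ for window functions); table1 and its sample rows here are made up for illustration.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE table1 (a INTEGER, b INTEGER, c TEXT)")
    conn.executemany("INSERT INTO table1 VALUES (?, ?, ?)",
                     [(1, 10, 'x'), (1, 20, 'y'), (2, 5, 'z')])

    # max(b) for each a is attached to every row of that group -- no join needed.
    rows = conn.execute("""
        SELECT a, b, c, MAX(b) OVER (PARTITION BY a) AS d
        FROM table1
        ORDER BY a, b
    """).fetchall()
    ```

    Every row keeps its own b and c, while d repeats the group maximum, which is exactly what a join against a grouped subquery would have produced.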
    
    qid & accept id: (23594298, 23594347) query: Select all data which is associated in and combination soup:

    soup wrap:

    You can do it like this:

    SELECT * FROM documents d
    RIGHT JOIN doc_labels dl
    ON(d.id = dl.doc_id)
    WHERE dl.label_id IN(1,2)
    GROUP BY d.id
    HAVING COUNT(DISTINCT dl.label_id) >= 2 /*this will give you the documents that must have labels 1 and 2 and can have more labels*/
    

    Or, if you need the documents with only labels 1 and 2, then change it to

    HAVING COUNT(DISTINCT dl.label_id) = 2
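    A small runnable sketch of the "at least both labels" variant, assuming a minimal documents / doc_labels schema (a plain JOIN is used instead of the answer's RIGHT JOIN, which older SQLite versions do not support):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE documents (id INTEGER PRIMARY KEY);
        CREATE TABLE doc_labels (doc_id INTEGER, label_id INTEGER);
        INSERT INTO documents VALUES (1), (2), (3);
        INSERT INTO doc_labels VALUES
            (1, 1), (1, 2),          -- doc 1 has labels 1 and 2
            (2, 1),                  -- doc 2 has only label 1
            (3, 1), (3, 2), (3, 5);  -- doc 3 has labels 1, 2 and 5
    """)

    # Documents that have at least labels 1 AND 2 (other labels allowed).
    docs = [r[0] for r in conn.execute("""
        SELECT d.id
        FROM documents d
        JOIN doc_labels dl ON d.id = dl.doc_id
        WHERE dl.label_id IN (1, 2)
        GROUP BY d.id
        HAVING COUNT(DISTINCT dl.label_id) >= 2
    """)]
    ```

    Document 2 is filtered out by the HAVING clause because only one distinct label survives the IN filter.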
    
    qid & accept id: (23608624, 23608729) query: select rows mysql where the value of the left join is different soup:

    soup wrap:

    You can do it like this:

    select *
    from messages m
    left join deleted_messages d on d.message_id = m.id
    where 
     d.message_id IS NULL
    AND m.user_id = 1
    

    This will give you all the messages from user 1 that are not deleted

    Demo

    Another way is to use NOT EXISTS

    select *
    from messages m
    where not exists
    (select 1 from deleted_messages d where d.message_id = m.id)
    AND m.user_id = 1
    

    Demo

    For the performance comparison you can find the details here: LEFT JOIN / IS NULL vs. NOT IN vs. NOT EXISTS: nullable columns
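    Both anti-join forms above can be checked side by side in SQLite; the messages / deleted_messages schema here is a minimal stand-in for the one in the question.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE messages (id INTEGER PRIMARY KEY, user_id INTEGER);
        CREATE TABLE deleted_messages (message_id INTEGER);
        INSERT INTO messages VALUES (1, 1), (2, 1), (3, 2);
        INSERT INTO deleted_messages VALUES (2);
    """)

    # LEFT JOIN ... IS NULL form: unmatched rows keep NULL in d.message_id.
    left_join = [r[0] for r in conn.execute("""
        SELECT m.id FROM messages m
        LEFT JOIN deleted_messages d ON d.message_id = m.id
        WHERE d.message_id IS NULL AND m.user_id = 1
    """)]

    # NOT EXISTS form: same result, expressed as a correlated subquery.
    not_exists = [r[0] for r in conn.execute("""
        SELECT m.id FROM messages m
        WHERE NOT EXISTS (SELECT 1 FROM deleted_messages d WHERE d.message_id = m.id)
          AND m.user_id = 1
    """)]
    ```

    Both queries return only message 1: message 2 is deleted and message 3 belongs to another user.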

    qid & accept id: (23626176, 23626513) query: Combining data between three tables soup:

    soup wrap:

    What you can do to achieve this is use joins: here is some MySQL documentation about this

    But here, using two tables for single partners and popularity is not really needed: since one row of single_partners corresponds to exactly one row of partner_popularity, you can put them in the same table. Use a default of zero when a partner has no popularity registered, so it will sort last when ordering by popularity.

    So you'll end up with two tables:

    Table 1 - partners

    |  partner_id  |  name  |  type  |  logo  |
    

    Table 2 - single_partners

    |  id  |  partner_id  |  address  |  zipcode  |  city  | pop_men | pop_women | pop_family
    

    Now your query to select all of that becomes extremely simple (just select the partners, filter the city, order them and you're done), and with a little grouping and a join, you can also select partners sorted by popularity summed over all cities:

    SELECT p.*,
           SUM(pop_men) AS total_pop_men,
           SUM(pop_women) AS total_pop_women,
           SUM(pop_family) AS total_pop_family
    FROM partners p
    JOIN single_partners sp ON sp.partner_id = p.partner_id
    GROUP BY partner_id
    ORDER BY total_pop_men DESC,
             total_pop_women DESC,
             total_pop_family DESC
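    The grouped join above can be exercised on sample data; this sketch trims the column set to what the aggregation needs, and the partner names and numbers are invented.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE partners (partner_id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE single_partners (partner_id INTEGER, city TEXT,
                                      pop_men INTEGER, pop_women INTEGER,
                                      pop_family INTEGER);
        INSERT INTO partners VALUES (1, 'Acme'), (2, 'Globex');
        INSERT INTO single_partners VALUES
            (1, 'Paris', 10, 20, 5),
            (1, 'Lyon',   5,  5, 5),
            (2, 'Paris',  8,  1, 1);
    """)

    # Popularity summed over all cities, most popular (by men) first.
    totals = conn.execute("""
        SELECT p.name, SUM(pop_men) AS total_pop_men,
               SUM(pop_women), SUM(pop_family)
        FROM partners p
        JOIN single_partners sp ON sp.partner_id = p.partner_id
        GROUP BY p.partner_id
        ORDER BY total_pop_men DESC
    """).fetchall()
    ```

    Each partner collapses to one row whose totals are the per-city sums, ordered by the aggregated popularity.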
    
    qid & accept id: (23642201, 23642292) query: How do I write paging/limits into a SQL query for 2008 R2? soup:

    soup wrap:

    Since you're using Server 2008, you can use this excellent example from that link. (formatted to be more readable):

    DECLARE @RowsPerPage INT = 10
    DECLARE @PageNumber INT = 6
    
    SELECT SalesOrderDetailID
        ,SalesOrderID
        ,ProductID
    FROM (
        SELECT SalesOrderDetailID
            ,SalesOrderID
            ,ProductID
            ,ROW_NUMBER() OVER (
                ORDER BY SalesOrderDetailID
                ) AS RowNum
        FROM Sales.SalesOrderDetail
        ) AS SOD
    WHERE SOD.RowNum BETWEEN ((@PageNumber - 1) * @RowsPerPage) + 1
            AND @RowsPerPage * (@PageNumber)
    

    This will return the sixth page, of ten records on each page. ROW_NUMBER() basically assigns a temporary Identity column for this query, ordered by SalesOrderDetailID.

    You can then select records where the row number is between 51-60, for that sixth page.

    Hope that makes sense
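    The same ROW_NUMBER() paging idea runs in SQLite (3.25+); the table below is a stand-in for Sales.SalesOrderDetail with sequential ids so the page contents are easy to check.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales_order_detail (id INTEGER PRIMARY KEY)")
    conn.executemany("INSERT INTO sales_order_detail VALUES (?)",
                     [(i,) for i in range(1, 101)])  # ids 1..100

    rows_per_page, page_number = 10, 6
    # Number every row once, then keep only the slice for the requested page.
    page = [r[0] for r in conn.execute("""
        SELECT id FROM (
            SELECT id, ROW_NUMBER() OVER (ORDER BY id) AS row_num
            FROM sales_order_detail
        ) AS sod
        WHERE row_num BETWEEN (? - 1) * ? + 1 AND ? * ?
    """, (page_number, rows_per_page, rows_per_page, page_number))]
    ```

    Page 6 at 10 rows per page covers row numbers 51 through 60.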


    Working from your added attempt:

    DECLARE @RowsPerPage INT = 10
    DECLARE @PageNumber INT = 6
    
    SELECT *
    FROM (
        SELECT t1.*
            ,t3.[timestamp]
            ,t3.comments
            ,ROW_NUMBER() OVER (
                ORDER BY t1.id
                ) AS RowNum
        FROM crm_main t1
        INNER JOIN crm_group_relationships t2 ON t1.id = t2.customerid
        OUTER APPLY (
            SELECT TOP 1 t3.[timestamp]
                ,t3.customerid
                ,t3.comments
            FROM crm_comments t3
            WHERE t1.id = t3.customerid
            ORDER BY t3.TIMESTAMP ASC
            ) t3
        WHERE t1.dealerid = '9999'
            AND t2.groupid = '251'
        ) AS x
    WHERE x.RowNum BETWEEN ((@PageNumber - 1) * @RowsPerPage) + 1
            AND @RowsPerPage * (@PageNumber)
    
    qid & accept id: (23676371, 23680644) query: Performance monitoring for standalone .NET desktop application with New Relic soup:

    soup wrap:

    I work for New Relic.

    It is possible to monitor the performance of non-IIS applications as long as they meet certain requirements.

    You can read more about these requirements on our documentation site here: https://docs.newrelic.com/docs/dotnet/instrumenting-custom-applications

    You may need to gather custom metrics by using our .NET agent API. The methods RecordMetric, RecordResponseTimeMetric, and IncrementCounter specifically work with non-web applications. Our .NET agent API documentation is located here: https://docs.newrelic.com/docs/dotnet/net-agent-api

    You can also set up custom transactions to trace non-web transactions. We can normally trace functions that use HttpObjects, but the following is a new feature implemented in agent version 2.24.218.0. In the case of non-web apps and async calls where there is no transaction context, this feature can be used to create transactions where the agent would normally not do so. It is a manual process via a custom instrumentation file.

    Create a custom instrumentation file named, say, CustomInstrumentation.xml, in C:\ProgramData\New Relic\.NET Agent\Extensions alongside CoreInstrumentation.xml. Add the following content to your custom instrumentation file:

    <?xml version="1.0" encoding="utf-8"?>
    <extension xmlns="urn:newrelic-extension">
      <instrumentation>
        <tracerFactory name="NewRelic.Agent.Core.Tracer.Factories.BackgroundThreadTracerFactory" metricName="Category/Name">
          <match assemblyName="AssemblyName" className="NameSpace.ClassName">
            <exactMethodMatcher methodName="MethodName" />
          </match>
        </tracerFactory>
      </instrumentation>
    </extension>

    You must change the attribute values Category/Name, AssemblyName, NameSpace.ClassName, and MethodName above.

    The transaction starts when an object of type NameSpace.ClassName from assembly AssemblyName invokes the method MethodName. The transaction ends when the method returns or throws an exception. The transaction will be named Name and will be grouped into the transaction type specified by Category. In the New Relic UI you can select the transaction type from the Type drop down menu when viewing the Monitoring > Transactions page.

    Note that both Category and Name must be present and must be separated by a slash.

    As you would expect, instrumented activity (methods, database, externals) occurring during the method's invocation will be shown in the transaction's breakdown table and in transaction traces.

    Here is a more concrete example. First, the instrumentation file:

    <?xml version="1.0" encoding="utf-8"?>
    <extension xmlns="urn:newrelic-extension">
      <instrumentation>
        <!-- assemblyName below is illustrative; use your own assembly's name -->
        <tracerFactory name="NewRelic.Agent.Core.Tracer.Factories.BackgroundThreadTracerFactory" metricName="Background/Bars">
          <match assemblyName="MyAssembly" className="Foo">
            <exactMethodMatcher methodName="Bar1" />
            <exactMethodMatcher methodName="Bar2" />
          </match>
        </tracerFactory>
        <tracerFactory metricName="Custom/some custom metric name">
          <match assemblyName="MyAssembly" className="Foo">
            <exactMethodMatcher methodName="Bar3" />
          </match>
        </tracerFactory>
      </instrumentation>
    </extension>

    Now some code:

    var foo = new Foo();
    foo.Bar1(); // Creates a transaction named Bars in category Background
    foo.Bar2(); // Same here.
    foo.Bar3(); // Won't create a new transaction.  See notes below.
    
    public class Foo
    {
        // this will result in a transaction with an External Service request segment in the transaction trace
        public void Bar1()
        {
            new WebClient().DownloadString("http://www.google.com/");
        }
    
        // this will result in a transaction that has one segment with a category of "Custom" and a name of "some custom metric name"
        public void Bar2()
        {
            // the segment for Bar3 will contain your SQL query inside of it and possibly an execution plan
            Bar3();
        }
    
        // if Bar3 is called directly, it won't get a transaction made for it.
        // However, if it is called inside of Bar1 or Bar2 then it will show up as a segment containing the SQL query
        private void Bar3()
        {
            using (var connection = new SqlConnection(ConnectionStrings["MsSqlConnection"].ConnectionString))
            {
                connection.Open();
                using (var command = new SqlCommand("SELECT * FROM table", connection))
                using (var reader = command.ExecuteReader())
                {
                    reader.Read();
                }
            }
        }
    }
    

    Here is a simple console app that demonstrates Custom Transactions:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Threading.Tasks;
    
    namespace ConsoleApplication1
    {
        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine("Custom Transactions");
                var t = new CustomTransaction();
                for (int i = 0; i < 100; ++i )
                    t.StartTransaction();
            }
        }
        class CustomTransaction
        {
            public void StartTransaction()
            {
                Console.WriteLine("StartTransaction");     
                Dummy();
            }
            void Dummy()
            {
                System.Threading.Thread.Sleep(5000);
            }
        }
    
    }
    

    Use the following custom instrumentation file:

    <?xml version="1.0" encoding="utf-8"?>
    <extension xmlns="urn:newrelic-extension">
        <instrumentation>
            <!-- assemblyName below is illustrative; use your own assembly's name -->
            <tracerFactory name="NewRelic.Agent.Core.Tracer.Factories.BackgroundThreadTracerFactory" metricName="Background/CustomTransaction">
                <match assemblyName="ConsoleApplication1" className="ConsoleApplication1.CustomTransaction">
                    <exactMethodMatcher methodName="StartTransaction" />
                </match>
            </tracerFactory>
            <tracerFactory metricName="Custom/Dummy">
                <match assemblyName="ConsoleApplication1" className="ConsoleApplication1.CustomTransaction">
                    <exactMethodMatcher methodName="Dummy" />
                </match>
            </tracerFactory>
        </instrumentation>
    </extension>
    qid & accept id: (23705421, 23710941) query: Get the rest of the row in a max group by soup:

    soup wrap:

    I would think this would solve your problem:

    SELECT who.employee_id, course.course_id,
           MAX(add_months(sess.end_date, vers.valid_for_months))
    

    That gets the latest end date. If you want the end date for the last session, use row_number():

    SELECT employee_id, course_id, end_date
    FROM (SELECT who.employee_id, course.course_id, sess.end_date,
                 row_number() over (partition by who.employee_id, course.course_id
                                    order by sess.end_date desc
                                   ) as seqnum
          FROM employee_session_join esj
          JOIN training_session sess on sess.session_id = esj.session_id
          JOIN course_version vers on vers.version_id = sess.version_id
          JOIN course course on course.course_id = vers.course_id
          JOIN employee who on who.employee_id = esj.employee_id
          WHERE esj.active_flag = 'Y'
            AND sess.active_flag = 'Y'
            AND course.active_flag = 'Y'
            AND who.active_flag = 'Y'
            AND esj.approval_status = 5 -- successfully passed
    ) e
    WHERE seqnum = 1;
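    The row_number() greatest-per-group pattern above can be demonstrated on a reduced sessions table (SQLite 3.25+ for window functions); the employee/course/date rows here are invented.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE sessions
                    (employee_id INTEGER, course_id INTEGER, end_date TEXT)""")
    conn.executemany("INSERT INTO sessions VALUES (?, ?, ?)", [
        (1, 100, '2014-01-10'),
        (1, 100, '2014-03-05'),
        (2, 100, '2014-02-01'),
    ])

    # seqnum = 1 picks exactly one row per (employee, course); ordering the
    # partition by end_date DESC makes that row the latest session.
    latest = conn.execute("""
        SELECT employee_id, course_id, end_date
        FROM (SELECT s.*,
                     ROW_NUMBER() OVER (PARTITION BY employee_id, course_id
                                        ORDER BY end_date DESC) AS seqnum
              FROM sessions s)
        WHERE seqnum = 1
        ORDER BY employee_id
    """).fetchall()
    ```

    Employee 1's two sessions collapse to the later one, while employee 2's single session passes through untouched.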
    
    qid & accept id: (23768482, 23779481) query: SSIS Converting Percent to Decimal soup:

    soup wrap:

    Hope this is what you are looking for

    Excel sheet like this is the source.


    I just tested it in my system and it is working fine. This is what I did.

    1. Created an SSIS package with just 1 DFT.
    2. Data flow is given below. Please note that the value which appeared as 40% in the Excel sheet is visible as 0.40. So I added two derived columns: one converting the value as such, and the next multiplying it by 100.


    the derived column structure is shown below.


    The destination table structure would be:

    Create table Destination
    (
    id int,
    name varchar(15),
    hike decimal(8,2)
    )
    

    I am getting the result as expected.

    Select * from Destination
    

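    The derived-column step from the data flow can be sketched in plain Python: Excel hands SSIS 40% as 0.40, so multiplying by 100 and rounding to the DECIMAL(8,2) scale recovers the percentage figure (the sample values here are invented).

    ```python
    # Values as read from the Excel source: 40%, 12.5% and 100%.
    raw_hikes = [0.40, 0.125, 1.0]

    # The second derived column: multiply by 100, rounded to 2 decimal places
    # to match the hike DECIMAL(8,2) destination column.
    hikes = [round(h * 100, 2) for h in raw_hikes]
    ```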

    qid & accept id: (23803359, 23803584) query: SQL selecting a column, SUM and ORDER BY using three tables soup:

    soup wrap:

    Sub query to get the latest price date, and join to prices:-

    SELECT stocks.id, stocks.size, prices.price, SUM(stocks.qty) - sales.qtySold   
    FROM stocks
    INNER JOIN
    (
        SELECT id, size, MAX(priceDT) AS MaxPriceDate
        FROM prices
        GROUP BY id, size
    ) Sub1
    ON stocks.id = Sub1.id AND stocks.size = Sub1.size
    INNER JOIN prices
    ON Sub1.id = prices.id AND Sub1.size = prices.size AND Sub1.MaxPriceDate = prices.priceDT
    INNER JOIN sales
    ON stocks.id = sales.id AND stocks.size = sales.size
    GROUP BY stocks.id, stocks.size
    

    My concern is that sales has multiple rows for each id / size

    EDIT - to cope with multiple rows on sales for an id / size using an additional subquery:-

    SELECT stocks.id, stocks.size, prices.price, SUM(stocks.qty) - Sub2.tot_qtySold   
    FROM stocks
    INNER JOIN
    (
        SELECT id, size, MAX(priceDT) AS MaxPriceDate
        FROM prices
        GROUP BY id, size
    ) Sub1
    ON stocks.id = Sub1.id AND stocks.size = Sub1.size
    INNER JOIN prices
    ON Sub1.id = prices.id AND Sub1.size = prices.size AND Sub1.MaxPriceDate = prices.priceDT
    INNER JOIN
    (
        SELECT id, size, SUM(qtySold) AS tot_qtySold
        FROM sales
        GROUP BY id, size
    ) Sub2
    ON stocks.id = Sub2.id AND stocks.size = Sub2.size
    GROUP BY stocks.id, stocks.size
    

    ON sqlfiddle:-

    http://www.sqlfiddle.com/#!2/f7d37/2

    EDIT - in answer to a question posted in the comment:-

    The reason for this is that there are 2 matching records on the stocks table.

    So for brandid 100 and size of 90 there are these 2 records from stocks:-

    brandId size    qtyArr
    (100 ,  90   ,  10),
    (100 ,  90   ,  100),
    

    and this one from sales:-

    brandId size    qtySold
    (100,   90, 35),
    

    So MySQL will initially build up a table containing a set of 2 rows. The first row will contain the first row from stocks and the only matching row from sales. The 2nd row will have the 2nd row from stocks and (again) the matching row from sales.

    brandId size    qtyArr  brandId size    qtySold
    (100,   90, 10, 100,    90, 35),
    (100,   90, 100,    100,    90, 35),
    

    It then performs the SUM of qtySold, but the quantities are counted twice (i.e. once for each matching record on stocks).

    To get around this you will likely need a sub query to get the total qtySold for each brand / size, then join the results of that sub query against the stocks table:

    SELECT SUM(s.qtyArr), SUM(l.qtySold) 
    FROM stocks s 
    INNER join 
    (
        SELECT brandId, size, SUM(qtySold) AS qtySold
        FROM sales
        GROUP BY brandId, size
    ) l 
    ON l.brandId = s.brandId
    AND l.size = s.size
    WHERE s.brandId='100' AND s.size='90';
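    The double-counting described above can be demonstrated on the exact sample rows quoted earlier. One caveat: summing the pre-aggregated column in the outer query would still multiply it by the number of matching stock rows, so this sketch reads it with MAX instead (my variation, not the answer's query verbatim).

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE stocks (brandId INTEGER, size INTEGER, qtyArr INTEGER);
        CREATE TABLE sales  (brandId INTEGER, size INTEGER, qtySold INTEGER);
        INSERT INTO stocks VALUES (100, 90, 10), (100, 90, 100);
        INSERT INTO sales  VALUES (100, 90, 35);
    """)

    # Naive join: the single sales row matches both stock rows, so its
    # quantity is summed twice (35 becomes 70).
    naive = conn.execute("""
        SELECT SUM(s.qtyArr), SUM(l.qtySold)
        FROM stocks s JOIN sales l
          ON l.brandId = s.brandId AND l.size = s.size
        WHERE s.brandId = 100 AND s.size = 90
    """).fetchone()

    # Pre-aggregating sales in a subquery keeps one total per brand/size;
    # MAX reads that single value without re-multiplying it.
    fixed = conn.execute("""
        SELECT SUM(s.qtyArr), MAX(l.qtySold)
        FROM stocks s
        JOIN (SELECT brandId, size, SUM(qtySold) AS qtySold
              FROM sales GROUP BY brandId, size) l
          ON l.brandId = s.brandId AND l.size = s.size
        WHERE s.brandId = 100 AND s.size = 90
    """).fetchone()
    ```

    The naive join reports 70 units sold; the subquery version reports the correct 35.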
    
    qid & accept id: (23807485, 23808051) query: How to nest multiple MAX (...) statements in one CASE WHEN Query soup:

    soup wrap:

    In the first example, you use your MAX function to turn a single article_code column into two different columns (has9 and has8). In your second example, you are no longer splitting up your article_code column into multiple columns, therefore, as far as I can tell, you no longer need your MAX function.

    Have you tried something along the following lines?

    SELECT CASE WHEN SUBSTRING(article_code,5,1) IN ('9') THEN 'has9'
                WHEN SUBSTRING(article_code,5,1) IN ('8') THEN 'has8'
                ELSE 'FIX'
           END as test_version
    FROM xxxx
    

    EDIT: Ah, in that case you will still need the MAX function to reduce it to a single line.

    You should be able to use your original query as a subquery that gets a single line and then use a CASE WHEN to convert it to a single string:

    SELECT CASE WHEN has9 = 1 THEN 'has9'
                WHEN has8 = 1 THEN 'has8'
                ELSE 'FIX'
           END as test_version
    FROM (  SELECT MAX(CASE WHEN SUBSTRING(article_code,5,1) IN ('9') THEN 1 ELSE 0 END) AS has9,
                   MAX(CASE WHEN SUBSTRING(article_code,5,1) IN ('8') THEN 1 ELSE 0 END) AS has8
            FROM xxxx ) sub
    
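To see the flag-collapsing trick in action outside SQL Server, here is a minimal sketch using Python's sqlite3 module (the table name xxxx is kept from the answer; the sample article codes are invented):

```python
import sqlite3

# In-memory database with a hypothetical xxxx table of article codes.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE xxxx (article_code TEXT)")
con.executemany("INSERT INTO xxxx VALUES (?)",
                [("abcd9xyz",), ("abcd8xyz",), ("abcd7xyz",)])

# MAX(CASE ...) collapses the per-row flags into a single row,
# then the outer CASE turns the flags into one label.
row = con.execute("""
    SELECT CASE WHEN has9 = 1 THEN 'has9'
                WHEN has8 = 1 THEN 'has8'
                ELSE 'FIX'
           END AS test_version
    FROM (SELECT MAX(CASE WHEN SUBSTR(article_code, 5, 1) = '9' THEN 1 ELSE 0 END) AS has9,
                 MAX(CASE WHEN SUBSTR(article_code, 5, 1) = '8' THEN 1 ELSE 0 END) AS has8
          FROM xxxx) sub
""").fetchone()
print(row[0])  # 'has9' wins because at least one code has '9' in position 5
```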

    Or, you could use my earlier query as subquery and use the MAX function to reduce it to a single line:

    SELECT CASE WHEN MAX(result_rank) = 3 THEN 'has9'
                WHEN MAX(result_rank) = 2 THEN 'has8'
                ELSE 'FIX'
           END as test_version
    FROM ( SELECT CASE WHEN SUBSTRING(article_code,5,1) IN ('9') THEN 3
                       ELSE SUBSTRING(article_code,5,1) IN ('8') THEN 2
                       ELSE 1
                  END as result_rank
           FROM xxxx )
    
    qid & accept id: (23821632, 23822213) query: How to segment a sequence of event by signal in SQL? soup:

    This is written in SQL Server syntax (for the table variable for the sample data) but it's fairly standard SQL and by looking at the query reference, I think it should run in BigQuery (once adapted to your actual table):

    \n
    declare @t table ([order] int, event char(1))\ninsert into @t([order],event) values\n(1,'C'),    (2,'C'),    (3,'C'),    (4,'S'),    (5,'C'),\n(6,'S'),    (7,'C'),    (8,'C'),    (9,'S')\n\nselect\n    t.*,\n    s1.rn\nfrom @t t\n    inner join\n(\nselect\n    *,\n    ROW_NUMBER() OVER (ORDER BY [order]) as rn\nfrom\n    @t\nwhere\n    event='S'\n) s1\n    on\n        t.[order] <= s1.[order]\n    left join\n(\nselect\n    *,\n    ROW_NUMBER() OVER (ORDER BY [order]) as rn\nfrom\n    @t\nwhere\n    event='S'\n) s2\n    on\n        t.[order] <= s2.[order] and\n        s2.[order] < s1.[order]\nwhere\n    s2.[order] is null\n
    \n

    I would have normally used a Common Table Expression (CTE) rather than duplicating the subquery for the S values, but I couldn't see whether that was supported.

    \n

    The logic should be fairly straightforward to see - we number the S rows using a simple ROW_NUMBER() function, and then we match every row from the original table to the S row which most immediately succeeds it.

    \n
    \n

    CTE variant (but, like I said, I couldn't see support for CTEs in the documentation):

    \n
    declare @t table ([order] int, event char(1))\ninsert into @t([order],event) values\n(1,'C'),    (2,'C'),    (3,'C'),    (4,'S'),    (5,'C'),\n(6,'S'),    (7,'C'),    (8,'C'),    (9,'S')\n\n;With Numbered as (\n    select\n    *,\n    ROW_NUMBER() OVER (ORDER BY [order]) as rn\nfrom\n    @t\nwhere\n    event='S'\n)\nselect\n    t.*,\n    s1.rn\nfrom @t t\n    inner join\nNumbered s1\n    on\n        t.[order] <= s1.[order]\n    left join\nNumbered s2\n    on\n        t.[order] <= s2.[order] and\n        s2.[order] < s1.[order]\nwhere\n    s2.[order] is null\n
    \n soup wrap:

    This is written in SQL Server syntax (for the table variable for the sample data) but it's fairly standard SQL and by looking at the query reference, I think it should run in BigQuery (once adapted to your actual table):

    declare @t table ([order] int, event char(1))
    insert into @t([order],event) values
    (1,'C'),    (2,'C'),    (3,'C'),    (4,'S'),    (5,'C'),
    (6,'S'),    (7,'C'),    (8,'C'),    (9,'S')
    
    select
        t.*,
        s1.rn
    from @t t
        inner join
    (
    select
        *,
        ROW_NUMBER() OVER (ORDER BY [order]) as rn
    from
        @t
    where
        event='S'
    ) s1
        on
            t.[order] <= s1.[order]
        left join
    (
    select
        *,
        ROW_NUMBER() OVER (ORDER BY [order]) as rn
    from
        @t
    where
        event='S'
    ) s2
        on
            t.[order] <= s2.[order] and
            s2.[order] < s1.[order]
    where
        s2.[order] is null
    

    I would have normally used a Common Table Expression (CTE) rather than duplicating the subquery for the S values, but I couldn't see whether that was supported.

    The logic should be fairly straightforward to see - we number the S rows using a simple ROW_NUMBER() function, and then we match every row from the original table to the S row which most immediately succeeds it.
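The same matching logic can be sketched procedurally; a small Python illustration using the answer's sample data:

```python
# Sample rows (order, event); an 'S' event closes a segment.
rows = [(1, 'C'), (2, 'C'), (3, 'C'), (4, 'S'), (5, 'C'),
        (6, 'S'), (7, 'C'), (8, 'C'), (9, 'S')]

# Number the S rows (the ROW_NUMBER() step) ...
s_orders = [order for order, event in rows if event == 'S']

# ... then tag every row with the number of the first S row
# at or after it (the join condition in the query).
segmented = []
for order, event in rows:
    rn = next(i + 1 for i, s in enumerate(s_orders) if order <= s)
    segmented.append((order, event, rn))

print(segmented)  # orders 1-4 -> segment 1, 5-6 -> 2, 7-9 -> 3
```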


    CTE variant (but, like I said, I couldn't see support for CTEs in the documentation):

    declare @t table ([order] int, event char(1))
    insert into @t([order],event) values
    (1,'C'),    (2,'C'),    (3,'C'),    (4,'S'),    (5,'C'),
    (6,'S'),    (7,'C'),    (8,'C'),    (9,'S')
    
    ;With Numbered as (
        select
        *,
        ROW_NUMBER() OVER (ORDER BY [order]) as rn
    from
        @t
    where
        event='S'
    )
    select
        t.*,
        s1.rn
    from @t t
        inner join
    Numbered s1
        on
            t.[order] <= s1.[order]
        left join
    Numbered s2
        on
            t.[order] <= s2.[order] and
            s2.[order] < s1.[order]
    where
        s2.[order] is null
    
    qid & accept id: (23828906, 23829371) query: Getting Month and Day from a date soup:
    SELECT CONVERT(CHAR(5), GETDATE(), 10)\n
    \n

    Result:

    \n
    05-23\n
    \n soup wrap:
    SELECT CONVERT(CHAR(5), GETDATE(), 10)
    

    Result:

    05-23
    
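For comparison, the same month-day extraction in Python (a fixed date stands in for GETDATE(); T-SQL style 10 is mm-dd-yy, so CHAR(5) keeps just the leading mm-dd):

```python
from datetime import date

# Equivalent of CONVERT(CHAR(5), GETDATE(), 10).
d = date(2014, 5, 23)
print(d.strftime("%m-%d"))  # 05-23
```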
    qid & accept id: (23892604, 23892744) query: Compare two MySQL tables and remove rows that no longer exist soup:

    If you are using SQL to merge, a simple SQL can do the delete as well:

    \n
    delete from database_production.table\nwhere pk not in (select pk from database_temporary.table)\n
    \n

    Notes:

    \n\n

    An example not exists:

    \n
    delete from database_production.table p\nwhere not exists (select 1 from database_temporary.table t where t.pk = p.pk)\n
    \n

    Performance Notes:
    \nAs pointed out by @mgonzalez in the comments on the question, you may want to use a timestamp column (something like last modified) for comparing/merging in general so that you compare only changed rows. This does not apply to the delete specifically; you cannot use a timestamp for the delete because, well, the row would not exist.

    \n soup wrap:

    If you are using SQL to merge, a simple SQL can do the delete as well:

    delete from database_production.table
    where pk not in (select pk from database_temporary.table)
    

    Notes: not in matches nothing if the subquery can return a NULL pk, so not exists is generally the safer form.

    An example using not exists:

    delete from database_production.table p
    where not exists (select 1 from database_temporary.table t where t.pk = p.pk)
    
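A runnable sketch of the not exists delete, using Python's sqlite3 with invented table names prod and temp_t:

```python
import sqlite3

# Only pks still present in the temporary table survive the delete.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE prod (pk INTEGER)")
con.execute("CREATE TABLE temp_t (pk INTEGER)")
con.executemany("INSERT INTO prod VALUES (?)", [(1,), (2,), (3,)])
con.executemany("INSERT INTO temp_t VALUES (?)", [(1,), (3,)])

con.execute("""
    DELETE FROM prod
    WHERE NOT EXISTS (SELECT 1 FROM temp_t WHERE temp_t.pk = prod.pk)
""")
remaining = [r[0] for r in con.execute("SELECT pk FROM prod ORDER BY pk")]
print(remaining)  # [1, 3]
```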

    Performance Notes:
    As pointed out by @mgonzalez in the comments on the question, you may want to use a timestamp column (something like last modified) for comparing/merging in general so that you compare only changed rows. This does not apply to the delete specifically; you cannot use a timestamp for the delete because, well, the row would not exist.

    qid & accept id: (23907556, 23907600) query: Copying data want to keep to a new table and then rename soup:

    The query to copy everything to the new table goes like this:

    \n
    SELECT * INTO dbo.NewTable FROM dbo.OldTable WHERE [event id] <> 6030\n
    \n

    Then:

    \n
    EXEC sp_rename 'dbo.OldTable', 'OldTable_History';\n
    \n

    And:

    \n
    EXEC sp_rename 'dbo.NewTable', 'OldTable';\n
    \n

    If you want to create the table manually do it then and after that run this:

    \n
    INSERT INTO dbo.NewTable\nSELECT * FROM dbo.OldTable WHERE [event id] <> 6030\n
    \n soup wrap:

    The query to copy everything to the new table goes like this:

    SELECT * INTO dbo.NewTable FROM dbo.OldTable WHERE [event id] <> 6030
    

    Then:

    EXEC sp_rename 'dbo.OldTable', 'OldTable_History';
    

    And:

    EXEC sp_rename 'dbo.NewTable', 'OldTable';
    

    If you want to create the new table manually instead, do that first and then run this:

    INSERT INTO dbo.NewTable
    SELECT * FROM dbo.OldTable WHERE [event id] <> 6030
    
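The whole copy-filter-rename flow can be sketched with Python's sqlite3 (SQLite does support ALTER TABLE ... RENAME TO; the table and column names are stand-ins for the answer's dbo.OldTable and [event id]):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE OldTable (event_id INTEGER, payload TEXT)")
con.executemany("INSERT INTO OldTable VALUES (?, ?)",
                [(6030, 'drop me'), (1, 'keep'), (2, 'keep too')])

# Copy everything except event id 6030 into a new table.
con.execute("CREATE TABLE NewTable AS SELECT * FROM OldTable WHERE event_id <> 6030")
# Keep the original as history, then promote the new table.
con.execute("ALTER TABLE OldTable RENAME TO OldTable_History")
con.execute("ALTER TABLE NewTable RENAME TO OldTable")

count = con.execute("SELECT COUNT(*) FROM OldTable").fetchone()[0]
print(count)  # 2
```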
    qid & accept id: (23908145, 23908285) query: SQL Server - Change Date Format soup:

    Try like this

    \n
    SELECT LEFT(DATENAME(dw, GETDATE()), 3) + ' , ' + CAST(Day(GetDate()) AS Varchar(10))\n
    \n

    Fiddle Demo

    \n

    Query would be like this

    \n
    SELECT mydate,LEFT(DATENAME(dw, mydate), 3) + ' , ' + CAST(Day(mydate) AS Varchar(10)) As Date \nFrom tbl\n
    \n

    SQL FIDDLE

    \n

    O/P

    \n
    MYDATE        DATE\n2014-04-21    Mon ,21\n2014-04-22    Tue ,22\n2014-04-23    Wed ,23\n2014-04-24    Thu ,24\n
    \n soup wrap:

    Try like this

    SELECT LEFT(DATENAME(dw, GETDATE()), 3) + ' , ' + CAST(Day(GetDate()) AS Varchar(10))
    

    Fiddle Demo

    Query would be like this

    SELECT mydate,LEFT(DATENAME(dw, mydate), 3) + ' , ' + CAST(Day(mydate) AS Varchar(10)) As Date 
    From tbl
    

    SQL FIDDLE

    O/P

    MYDATE        DATE
    2014-04-21    Mon ,21
    2014-04-22    Tue ,22
    2014-04-23    Wed ,23
    2014-04-24    Thu ,24
    
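The same formatting in Python, for comparison (short_day is a made-up helper name):

```python
from datetime import date

# Equivalent of LEFT(DATENAME(dw, mydate), 3) + ' , ' + DAY(mydate):
# abbreviated weekday name plus the day of month.
def short_day(d):
    return f"{d.strftime('%a')} , {d.day}"

print(short_day(date(2014, 4, 21)))  # Mon , 21
```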
    qid & accept id: (23924244, 23924333) query: How to identify duplicate rows having value within data range in oracle soup:

    You can use EXISTS for this:

    \n
    select * \nfrom yourtable y\nwhere exists (\n  select 1\n  from yourtable y2\n  where y.id <> y2.id \n    and y.name = y2.name\n    and (y2.startfield between y.startfield and y.endfield\n         or\n         y.startfield between y2.startfield and y2.endfield))\n
    \n\n

    I wasn't completely sure from your question if the end range had to be included as well. If so, you'll need to add that to the where criteria:

    \n
    select * \nfrom yourtable y\nwhere exists (\n  select 1\n  from yourtable y2\n  where y.id <> y2.id \n    and y.name = y2.name\n    and ((y2.startfield > y.startfield and y2.endfield < y.endfield)\n         or\n         (y.startfield > y2.startfield and y.endfield < y2.endfield)))\n
    \n soup wrap:

    You can use EXISTS for this:

    select * 
    from yourtable y
    where exists (
      select 1
      from yourtable y2
      where y.id <> y2.id 
        and y.name = y2.name
        and (y2.startfield between y.startfield and y.endfield
             or
             y.startfield between y2.startfield and y2.endfield))
    

    I wasn't completely sure from your question if the end range had to be included as well. If so, you'll need to add that to the where criteria:

    select * 
    from yourtable y
    where exists (
      select 1
      from yourtable y2
      where y.id <> y2.id 
        and y.name = y2.name
        and ((y2.startfield > y.startfield and y2.endfield < y.endfield)
             or
             (y.startfield > y2.startfield and y.endfield < y2.endfield)))
    
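The overlap test inside the EXISTS subquery can be sketched in plain Python (the ids, names, and ranges below are invented):

```python
# Rows: (id, name, start, end). Two rows with the same name overlap
# when either range's start falls inside the other range.
rows = [(1, 'a', 10, 20), (2, 'a', 15, 25), (3, 'b', 30, 40)]

def overlapping(rows):
    out = []
    for rid, name, s, e in rows:
        for rid2, name2, s2, e2 in rows:
            if rid != rid2 and name == name2 and (s <= s2 <= e or s2 <= s <= e2):
                out.append(rid)
                break
    return out

print(overlapping(rows))  # [1, 2]
```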
    qid & accept id: (23948815, 23948894) query: SQL: How to find product codes soup:

    Use substring() to extract the product code, and group-by with having to find the hits:

    \n
    select substring(product_id, 5, len(product_id)) code\nfrom products\ngroup by substring(product_id, 5, len(product_id))\nhaving count(*) > 1\n
    \n

    If you want a specific one, add a where clause:

    \n
    select substring(product_id, 5, len(product_id)) code\nfrom products\nwhere substring(product_id, 5, len(product_id)) = '0700400B'\ngroup by substring(product_id, 5, len(product_id))\nhaving count(*) > 1\n
    \n soup wrap:

    Use substring() to extract the product code, and group-by with having to find the hits:

    select substring(product_id, 5, len(product_id)) code
    from products
    group by substring(product_id, 5, len(product_id))
    having count(*) > 1
    

    If you want a specific one, add a where clause:

    select substring(product_id, 5, len(product_id)) code
    from products
    where substring(product_id, 5, len(product_id)) = '0700400B'
    group by substring(product_id, 5, len(product_id))
    having count(*) > 1
    
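A runnable sketch of the duplicate-suffix search with Python's sqlite3 (sample product ids invented; SUBSTR(product_id, 5) takes everything from the fifth character on):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (product_id TEXT)")
con.executemany("INSERT INTO products VALUES (?)",
                [("US010700400B",), ("DE020700400B",), ("FR031234567X",)])

# Group by the code suffix and keep the suffixes appearing more than once.
dupes = [r[0] for r in con.execute("""
    SELECT SUBSTR(product_id, 5) AS code
    FROM products
    GROUP BY SUBSTR(product_id, 5)
    HAVING COUNT(*) > 1
""")]
print(dupes)  # ['0700400B']
```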
    qid & accept id: (23950035, 23950156) query: How would you select records from a table based on the difference between 'created' dates with MySQL? soup:

    slower option

    \n
    SELECT id, TIME_TO_SEC(TIMEDIFF(MAX(created_at),MIN(created_at))) as seconds_difference\nFROM `table`\nGROUP BY id\nHAVING seconds_difference > 3600*24\n
    \n

    faster option

    \n
    SELECT t1.id, TIME_TO_SEC(TIMEDIFF(t2.created_at, t1.created_at)) as seconds_difference\nFROM `table` t1\nINNER JOIN `table` t2 ON (t2.id = t1.id AND t2.created_at > t1.created_at)\nWHERE TIME_TO_SEC(TIMEDIFF(t2.created_at, t1.created_at)) > 3600*24\n
    \n soup wrap:

    slower option

    SELECT id, TIME_TO_SEC(TIMEDIFF(MAX(created_at),MIN(created_at))) as seconds_difference
    FROM `table`
    GROUP BY id
    HAVING seconds_difference > 3600*24
    

    faster option

    SELECT t1.id, TIME_TO_SEC(TIMEDIFF(t2.created_at, t1.created_at)) as seconds_difference
    FROM `table` t1
    INNER JOIN `table` t2 ON (t2.id = t1.id AND t2.created_at > t1.created_at)
    WHERE TIME_TO_SEC(TIMEDIFF(t2.created_at, t1.created_at)) > 3600*24
    
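The slower option restated in plain Python, to show what the GROUP BY/HAVING pair computes (sample rows invented):

```python
from datetime import datetime

# Group rows by id and keep the ids whose max-min created_at gap
# exceeds 24 hours.
rows = [
    (1, datetime(2014, 5, 1, 0, 0)),
    (1, datetime(2014, 5, 3, 0, 0)),
    (2, datetime(2014, 5, 1, 0, 0)),
    (2, datetime(2014, 5, 1, 12, 0)),
]

by_id = {}
for rid, created in rows:
    by_id.setdefault(rid, []).append(created)

over_a_day = [rid for rid, ts in by_id.items()
              if (max(ts) - min(ts)).total_seconds() > 3600 * 24]
print(over_a_day)  # [1]
```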
    qid & accept id: (23954139, 23955928) query: SSRS report to show missing/ NULL entries Mon to Fri. soup:

    SQL is still the best way to get all the data you need. What I would recommend is creating a temp table with the limited values list you want, for instance Monday, Tuesday, etc. Then you can use the apply operator against your data table and get the not matching day values.

    \n
    SELECT * FROM Days D \nOUTER APPLY \n   ( \n   SELECT * FROM Orders E \n   WHERE DATEPART(wd,e.OrderDate) = D.DayName\n   ) A \n
    \n

    Would return something like:

    \n
    DayName    OrderCount  Amount\nMonday     2           50.00\nTuesday    NULL        NULL\nWednesday  5           125.00\nThursday   NULL        NULL\nFriday     7           225.00\n
    \n

    Below you can find an article on the apply operators that you can use:

    \n

    Cross and Outer Apply

    \n soup wrap:

    SQL is still the best way to get all the data you need. What I would recommend is creating a temp table with the limited list of values you want, for instance Monday, Tuesday, etc. Then you can use the apply operator against your data table and find the days that have no matching rows.

    SELECT * FROM Days D 
    OUTER APPLY 
       ( 
       SELECT * FROM Orders E 
       WHERE DATEPART(wd,e.OrderDate) = D.DayName
       ) A 
    

    Would return something like:

    DayName    OrderCount  Amount
    Monday     2           50.00
    Tuesday    NULL        NULL
    Wednesday  5           125.00
    Thursday   NULL        NULL
    Friday     7           225.00
    

    Below you can find an article on the apply operators that you can use:

    Cross and Outer Apply
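The shape of that result can be sketched in Python: a fixed day list joined against the order data, with missing days surfacing as None (like the NULLs from the OUTER APPLY):

```python
# Calendar side on the left, order data on the right.
days = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]
orders = {"Monday": (2, 50.00), "Wednesday": (5, 125.00), "Friday": (7, 225.00)}

report = [(day, *orders.get(day, (None, None))) for day in days]
print(report)
```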

    qid & accept id: (23959544, 23962356) query: Sliding, variable "window" with highest density of rows soup:

    Let's start with a table definition and some INSERT statements. This reflects your data before you changed the question.

    \n
    create table log_test (\n  datetime date not null,\n  action varchar(15) not null,\n  username varchar(15) not null,\n  primary key (datetime, action, username)\n);\n\ninsert into log_test values\n('2013-09-30', 'update', 'username'),\n('2013-12-15', 'update', 'username'),\n('2014-03-01', 'update', 'username'),\n('2014-03-02', 'update', 'username'),\n('2014-03-03', 'update', 'username'),\n('2014-03-05', 'update', 'username'),\n('2015-05-20', 'update', 'username');\n
    \n

    Now we build a table of integers. This kind of table is useful in many ways; mine has several million rows in it. (There are ways to automate the insert statements.)

    \n
    create table integers (\n  n integer not null,\n  primary key (n)\n);\ninsert into integers values \n (0),  (1),  (2),  (3),  (4),  (5),  (6),  (7),  (8),  (9),\n(10), (11), (12), (13), (14), (15), (16), (17), (18), (19),\n(20), (21), (22), (23), (24), (25), (26), (27), (28), (29),\n(30), (31), (32), (33), (34), (35), (36), (37), (38), (39),\n(40), (41), (42), (43), (44), (45), (46), (47), (48), (49);\n
    \n

    This statement gives us the dates from log_test, along with the number of days in the "window" we want to look at. You need to select distinct, because there might be multiple users with the same dates.

    \n
    select distinct datetime, t.n\nfrom log_test\ncross join (select n from integers where n between 10 and 40) t\norder by datetime, t.n;\n
    \n
    \ndatetime     n\n--\n2013-09-30   10\n2013-09-30   11\n2013-09-30   12\n...\n2015-05-20   39\n2015-05-20   40\n
    \n

    We can use that result as a derived table, and do date arithmetic on it.

    \n
    select datetime period_start, datetime + interval t2.n day period_end\nfrom (\n  select distinct datetime, t.n\n  from log_test\n  cross join (select n from integers where n between 10 and 40) t ) t2\norder by period_start, period_end;\n
    \n
    \nperiod_start  period_end\n--\n2013-09-30    2013-10-10\n2013-09-30    2013-10-11\n2013-09-30    2013-10-12\n...\n2015-05-20    2015-06-28\n2015-05-20    2015-06-29\n
    \n

    These intervals are off by one; 2013-09-30 to 2013-10-10 has 11 days. I'll leave that repair up to you.

    \n

    The next version counts the number of "happenings" in each period. In your case, as the question was originally written, we just need to count the number of rows in each period.

    \n
    select username, t3.period_start, t3.period_end, count(datetime) num_rows\nfrom log_test\ninner join (\n  select datetime period_start, datetime + interval t2.n day period_end\n  from (\n    select distinct datetime, t.n\n    from log_test\n    cross join (select n from integers where n between 10 and 40) t ) t2\n  order by period_start, period_end ) t3\non log_test.datetime between t3.period_start and t3.period_end\ngroup by username, t3.period_start, t3.period_end\norder by username, t3.period_start, t3.period_end;\n
    \n
    \nusername  period_start  period_end  num_rows\n--\nusername  2013-09-30    2013-10-10  1\nusername  2013-09-30    2013-10-11  1\nusername  2013-09-30    2013-10-12  1\n...\nusername  2014-03-01    2014-03-11  4\nusername  2014-03-01    2014-03-12  4\n...\nusername  2015-05-20    2015-06-28  1\nusername  2015-05-20    2015-06-29  1\n
    \n

    Finally, we can work some arithmetic magic, and get the density of each "window".

    \n
    select username, \n       t3.period_start, t3.period_end, t3.n, \n       count(datetime) num_rows,\n       count(datetime)/t3.n density\nfrom log_test\ninner join (\n  select datetime period_start, t2.n, datetime + interval t2.n day period_end\n  from (\n    select distinct datetime, t.n\n    from log_test\n    cross join (select n from integers where n between 10 and 40) t ) t2\n  order by period_start, period_end ) t3\non log_test.datetime between t3.period_start and t3.period_end\ngroup by username, t3.period_start, t3.period_end, t3.n\norder by username, density desc;\n
    \n
    \nusername  period_start  period_end  n   num_rows  density\n--\nusername  2014-03-01    2014-03-11  10  4         0.4000\nusername  2014-03-01    2014-03-12  11  4         0.3636\nusername  2014-03-01    2014-03-13  12  4         0.3333\n...\n
    \n

    Suggestions for refinement

    \n

    You might want to change the date arithmetic. As it stands, these queries simply add 'n' days to the dates in the test table. But that means the periods won't be symmetric around gaps. For example, the date 2014-03-01 appears after a long gap. As it stands now, we don't try to evaluate the density of a "window" that ends on 2014-03-01 (a "window" that comes at the first value in a gap from before it). This might be worth thinking through for your application.

    \n soup wrap:

    Let's start with a table definition and some INSERT statements. This reflects your data before you changed the question.

    create table log_test (
      datetime date not null,
      action varchar(15) not null,
      username varchar(15) not null,
      primary key (datetime, action, username)
    );
    
    insert into log_test values
    ('2013-09-30', 'update', 'username'),
    ('2013-12-15', 'update', 'username'),
    ('2014-03-01', 'update', 'username'),
    ('2014-03-02', 'update', 'username'),
    ('2014-03-03', 'update', 'username'),
    ('2014-03-05', 'update', 'username'),
    ('2015-05-20', 'update', 'username');
    

    Now we build a table of integers. This kind of table is useful in many ways; mine has several million rows in it. (There are ways to automate the insert statements.)

    create table integers (
      n integer not null,
      primary key (n)
    );
    insert into integers values
     (0),  (1),  (2),  (3),  (4),  (5),  (6),  (7),  (8),  (9),
    (10), (11), (12), (13), (14), (15), (16), (17), (18), (19),
    (20), (21), (22), (23), (24), (25), (26), (27), (28), (29),
    (30), (31), (32), (33), (34), (35), (36), (37), (38), (39),
    (40), (41), (42), (43), (44), (45), (46), (47), (48), (49);
    
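One way to automate those insert statements, sketched in Python:

```python
# Generate the VALUES list for the integers table instead of
# typing it by hand.
values = ", ".join(f"({n})" for n in range(50))
sql = f"insert into integers values {values};"
print(sql[:40])
```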

    This statement gives us the dates from log_test, along with the number of days in the "window" we want to look at. You need to select distinct, because there might be multiple users with the same dates.

    select distinct datetime, t.n
    from log_test
    cross join (select n from integers where n between 10 and 40) t
    order by datetime, t.n;
    
    datetime     n
    --
    2013-09-30   10
    2013-09-30   11
    2013-09-30   12
    ...
    2015-05-20   39
    2015-05-20   40
    

    We can use that result as a derived table, and do date arithmetic on it.

    select datetime period_start, datetime + interval t2.n day period_end
    from (
      select distinct datetime, t.n
      from log_test
      cross join (select n from integers where n between 10 and 40) t ) t2
    order by period_start, period_end;
    
    period_start  period_end
    --
    2013-09-30    2013-10-10
    2013-09-30    2013-10-11
    2013-09-30    2013-10-12
    ...
    2015-05-20    2015-06-28
    2015-05-20    2015-06-29
    

    These intervals are off by one; 2013-09-30 to 2013-10-10 has 11 days. I'll leave that repair up to you.

    The next version counts the number of "happenings" in each period. In your case, as the question was originally written, we just need to count the number of rows in each period.

    select username, t3.period_start, t3.period_end, count(datetime) num_rows
    from log_test
    inner join (
      select datetime period_start, datetime + interval t2.n day period_end
      from (
        select distinct datetime, t.n
        from log_test
        cross join (select n from integers where n between 10 and 40) t ) t2
      order by period_start, period_end ) t3
    on log_test.datetime between t3.period_start and t3.period_end
    group by username, t3.period_start, t3.period_end
    order by username, t3.period_start, t3.period_end;
    
    username  period_start  period_end  num_rows
    --
    username  2013-09-30    2013-10-10  1
    username  2013-09-30    2013-10-11  1
    username  2013-09-30    2013-10-12  1
    ...
    username  2014-03-01    2014-03-11  4
    username  2014-03-01    2014-03-12  4
    ...
    username  2015-05-20    2015-06-28  1
    username  2015-05-20    2015-06-29  1
    

    Finally, we can work some arithmetic magic, and get the density of each "window".

    select username, 
           t3.period_start, t3.period_end, t3.n, 
           count(datetime) num_rows,
           count(datetime)/t3.n density
    from log_test
    inner join (
      select datetime period_start, t2.n, datetime + interval t2.n day period_end
      from (
        select distinct datetime, t.n
        from log_test
        cross join (select n from integers where n between 10 and 40) t ) t2
      order by period_start, period_end ) t3
    on log_test.datetime between t3.period_start and t3.period_end
    group by username, t3.period_start, t3.period_end, t3.n
    order by username, density desc;
    
    username  period_start  period_end  n   num_rows  density
    --
    username  2014-03-01    2014-03-11  10  4         0.4000
    username  2014-03-01    2014-03-12  11  4         0.3636
    username  2014-03-01    2014-03-13  12  4         0.3333
    ...
    

    Suggestions for refinement

    You might want to change the date arithmetic. As it stands, these queries simply add 'n' days to the dates in the test table. But that means the periods won't be symmetric around gaps. For example, the date 2014-03-01 appears after a long gap. As it stands now, we don't try to evaluate the density of a "window" that ends on 2014-03-01 (a "window" that comes at the first value in a gap from before it). This might be worth thinking through for your application.
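The whole window-density computation can be sketched in plain Python using the sample dates above:

```python
from datetime import date, timedelta

# Density of each candidate "window": for every start date and width n
# (10 to 40 days), count the log dates in [start, start + n days]
# and divide by n. Sample dates from the log_test table above.
dates = [date(2013, 9, 30), date(2013, 12, 15), date(2014, 3, 1),
         date(2014, 3, 2), date(2014, 3, 3), date(2014, 3, 5),
         date(2015, 5, 20)]

best = max(
    ((start, n, sum(start <= d <= start + timedelta(days=n) for d in dates) / n)
     for start in dates for n in range(10, 41)),
    key=lambda w: w[2],
)
print(best)  # densest window starts 2014-03-01 with width 10, density 0.4
```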

    qid & accept id: (23963860, 23965305) query: Making ID attributes unique in XML soup:

    In your environment you can use XSLT 1.0 to transform the document and generate IDs during the process. See: DBMS_XSLPROCESSOR.

    \n

    With a XSLT stylesheet you can copy the nodes from your XML source to a result tree, creating unique IDs in the process. The IDs will not be sequential numbers, but unique string sequences generated by the generate-id() method. You can't control what they look like, but you can guarantee they are unique. (XSLT also allows you to get rid of duplicate nodes (using a key) if that's your intention, but from your example I understood that duplicate *ID*s doesn't actually mean the node is a duplicate, since you want to generate a new ID for it.)

    \n

    The stylesheet below has two templates. The second one is an identity transform: it simply copies elements and attributes to the result tree. The first template creates an attribute named id containing an unique ID.

    \n
    \n    \n    \n\n    \n        \n            \n                \n            \n            \n        \n    \n\n    \n        \n            \n        \n    \n\n\n
    \n

    The other templates (in this case only the identity template) are called for all nodes and attributes, except the id attribute by . The result is a copy of your original XML file with generated unique IDs for the book elements.

    \n

    If you had a XML such as this one:

    \n
    \n    \n        \n        \n        \n        \n            Text\n        \n        \n    \n    \n        \n    \n\n
    \n

    the XSLT above would transform it into this XML:

    \n
    \n   \n      \n      \n      \n      \n         Text\n      \n      \n   \n   \n      \n   \n\n
    \n

    (the string sequences are arbitrary, and might be different in your implementation).

    \n

    For creating ID/IDREF links the generated string sequences are better than numbers since you can use them anywhere (numbers and identifiers that start with numbers can't always be used as IDs). But if string sequences are not acceptable and you need sequential numbers, you can use XPath node position() in XQuery or XSLT to generate a number based on the element's position in the whole document (which will be unique). If all books are siblings in the same context, you can simply replace the generate-id(.) in the stylesheet above for position():

    \n
    \n    \n        \n            \n        \n        \n    \n\n
    \n

    (if the books are not siblings, you will need to do it in a slightly different way, using a variable).

    \n

    If you want to retain the existing IDs and only generate sequential ones for the duplicates, it will be a bit more complicated but you can achieve that with keys (or XQuery instead of XSLT). The maximum id can be obtained in XPath 2.0 using the max() function:

    \n
    max(//book/@id)\n
    \n

    That function does not exist in XPath 1.0, but you can obtain the maximum ID by using:

    \n
    //book[not(@id < //book/@id)]/@id\n
    \n soup wrap:

    In your environment you can use XSLT 1.0 to transform the document and generate IDs during the process. See: DBMS_XSLPROCESSOR.

    With an XSLT stylesheet you can copy the nodes from your XML source to a result tree, creating unique IDs in the process. The IDs will not be sequential numbers, but unique string sequences generated by the generate-id() method. You can't control what they look like, but you can guarantee they are unique. (XSLT also allows you to get rid of duplicate nodes (using a key) if that's your intention, but from your example I understood that duplicate *ID*s don't actually mean the node is a duplicate, since you want to generate a new ID for it.)

    The stylesheet below has two templates. The second one is an identity transform: it simply copies elements and attributes to the result tree. The first template creates an attribute named id containing a unique ID.

    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

        <xsl:output method="xml" indent="yes"/>

        <xsl:template match="book">
            <xsl:copy>
                <xsl:attribute name="id">
                    <xsl:value-of select="generate-id(.)"/>
                </xsl:attribute>
                <xsl:apply-templates select="@*[name() != 'id'] | node()"/>
            </xsl:copy>
        </xsl:template>

        <xsl:template match="@* | node()">
            <xsl:copy>
                <xsl:apply-templates select="@* | node()"/>
            </xsl:copy>
        </xsl:template>

    </xsl:stylesheet>
    

    The other templates (in this case only the identity template) are called for all nodes and attributes, except the id attribute, which is deliberately excluded so the old ID is not copied. The result is a copy of your original XML file with generated unique IDs for the book elements.

    If you had an XML document such as this one:

    <library>
        <book id="1">
            <chapter/>
            <chapter/>
            <chapter/>
            <chapter>
                Text
            </chapter>
            <chapter/>
        </book>
        <book id="1">
            <chapter/>
        </book>
    </library>
    

    the XSLT above would transform it into this XML:

    <library>
       <book id="d0e2">
          <chapter/>
          <chapter/>
          <chapter/>
          <chapter>
             Text
          </chapter>
          <chapter/>
       </book>
       <book id="d0e16">
          <chapter/>
       </book>
    </library>
    

    (the string sequences are arbitrary, and might be different in your implementation).

    For creating ID/IDREF links the generated string sequences are better than numbers since you can use them anywhere (numbers, and identifiers that start with numbers, can't always be used as IDs). But if string sequences are not acceptable and you need sequential numbers, you can use the XPath position() function in XQuery or XSLT to generate a number based on the element's position in the whole document (which will be unique). If all books are siblings in the same context, you can simply replace the generate-id(.) in the stylesheet above with position():

    <xsl:template match="book">
        <xsl:copy>
            <xsl:attribute name="id">
                <xsl:value-of select="position()"/>
            </xsl:attribute>
            <xsl:apply-templates select="@*[name() != 'id'] | node()"/>
        </xsl:copy>
    </xsl:template>
    

    (if the books are not siblings, you will need to do it in a slightly different way, using a variable).
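If XSLT is unavailable, a similar sequential-ID repair can be sketched with Python's standard xml.etree module (the library/book sample below is invented):

```python
import xml.etree.ElementTree as ET

# Give every book a fresh sequential id, regardless of duplicates
# (position()-style numbering).
xml = '<library><book id="1"/><book id="1"/><book id="2"/></library>'
root = ET.fromstring(xml)

for position, book in enumerate(root.iter("book"), start=1):
    book.set("id", f"b{position}")

ids = [book.get("id") for book in root.iter("book")]
print(ids)  # ['b1', 'b2', 'b3']
```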

    If you want to retain the existing IDs and only generate sequential ones for the duplicates, it will be a bit more complicated but you can achieve that with keys (or XQuery instead of XSLT). The maximum id can be obtained in XPath 2.0 using the max() function:

    max(//book/@id)
    

    That function does not exist in XPath 1.0, but you can obtain the maximum ID by using:

    //book[not(@id < //book/@id)]/@id
    
    qid & accept id: (23992536, 23992643) query: Extract Date from VARCHAR string ORacle soup:

    By extract, do you mean something like:

    \n
    DECLARE\n    match VARCHAR2(255);\nBEGIN\n    match := REGEXP_SUBSTR(subject, '\d{2}-\w{3}-\d{4}', 1, 1, 'im');\nEND;\n
    \n

    Explain Regex

    \n
    \d{2}                    # digits (0-9) (2 times)\n-                        # '-'\n\w{3}                    # word characters (a-z, A-Z, 0-9, _) (3\n                         # times)\n-                        # '-'\n\d{4}                    # digits (0-9) (4 times)\n
    \n soup wrap:

    By extract, do you mean something like:

    DECLARE
        match VARCHAR2(255);
    BEGIN
        match := REGEXP_SUBSTR(subject, '\d{2}-\w{3}-\d{4}', 1, 1, 'im');
    END;
    

    Explain Regex

    \d{2}                    # digits (0-9) (2 times)
    -                        # '-'
    \w{3}                    # word characters (a-z, A-Z, 0-9, _) (3
                             # times)
    -                        # '-'
    \d{4}                    # digits (0-9) (4 times)
    
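The same pattern works in any regex engine; a Python sketch with an invented subject string:

```python
import re

# Pull a dd-Mon-yyyy date out of a longer string, like the
# REGEXP_SUBSTR call above.
subject = "invoice finalised 17-Jan-2014 by clerk"
match = re.search(r"\d{2}-\w{3}-\d{4}", subject, re.IGNORECASE)
print(match.group(0))  # 17-Jan-2014
```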
    qid & accept id: (23997222, 23997311) query: Select by a key for all associated records from a denormalizing database soup:

    soup wrap:

    This statement will probably prevent everything from working:

    EXEC ('SELECT *  FROM '+ @tablename +'where  EmployeeID = 102')
    

    You need a space after the table name:

    EXEC ('SELECT *  FROM '+ @tablename +' where  EmployeeID = 102')
    
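    The effect of the missing space is easy to see if you build the string by hand; a quick Python sketch (the table name Employees is hypothetical):

```python
tablename = "Employees"  # hypothetical table name

# Without the leading space, the table name and the keyword fuse into one token.
broken = "SELECT *  FROM " + tablename + "where  EmployeeID = 102"
fixed = "SELECT *  FROM " + tablename + " where  EmployeeID = 102"

print(broken)  # SELECT *  FROM Employeeswhere  EmployeeID = 102
print(fixed)   # SELECT *  FROM Employees where  EmployeeID = 102
```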

    In addition, your cursor logic seems off. You should be checking for @@FETCH_STATUS and then closing and deallocating the cursor.

    Follow the example at the end of the documentation.

    qid & accept id: (24012213, 24015752) query: COUNT on Sub Query and Join soup:

    soup wrap:

    In the first query you group by ids, in the second by names. So the first query gives you counts per customer and product, whereas the second query gives you counts per equally named customers and equally named products.

    Example:

    user 1 = John, user 2 = John
    product a = toy, product b = toy
    orders: 1 a, 1 a, 1 b, 2 a
    

    query 1:

    2, John, toy
    1, John, toy
    1, John, toy
    

    query 2:

    4, John, toy
    
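    The difference between the two groupings can be reproduced end to end; a sketch using Python's sqlite3 with the example data above (table and column names are made up to match the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users    (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE products (id TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE orders   (user_id INTEGER, product_id TEXT);
    INSERT INTO users    VALUES (1, 'John'), (2, 'John');
    INSERT INTO products VALUES ('a', 'toy'), ('b', 'toy');
    INSERT INTO orders   VALUES (1, 'a'), (1, 'a'), (1, 'b'), (2, 'a');
""")

# Query 1: group by ids -> one count per (customer, product) pair.
per_id = conn.execute("""
    SELECT COUNT(*), u.name, p.name
    FROM orders o
    JOIN users u    ON u.id = o.user_id
    JOIN products p ON p.id = o.product_id
    GROUP BY u.id, p.id
    ORDER BY COUNT(*) DESC
""").fetchall()

# Query 2: group by names -> equally named rows collapse into one group.
per_name = conn.execute("""
    SELECT COUNT(*), u.name, p.name
    FROM orders o
    JOIN users u    ON u.id = o.user_id
    JOIN products p ON p.id = o.product_id
    GROUP BY u.name, p.name
""").fetchall()

print(per_id)    # [(2, 'John', 'toy'), (1, 'John', 'toy'), (1, 'John', 'toy')]
print(per_name)  # [(4, 'John', 'toy')]
```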
    qid & accept id: (24035933, 24036260) query: Select a record just if the one before it has a lower value takes too long and fail soup:

    soup wrap:

    Here's a solution for your question 1 which will run much faster, since your query has many full table scans and dependent subqueries. This one will do at most one table scan (and maybe use a temporary table, depending on how large your data is and how much memory you've got). I think you can easily adjust it to your question here. Question 2 (I haven't really read it) is probably also answered, since it's now easy to just add where date_column = whatever.

    select * from (
        select
        t.*,
        if(@prev_toner < Remain_Toner_Black and @prev_sn = SerialNumber, 1, 0) as select_it,
        @prev_sn := SerialNumber,
        @prev_toner := Remain_Toner_Black
        from
        Table1 t
        , (select @prev_toner:=0, @prev_sn:=SerialNumber from Table1 order by SerialNumber limit 1) var_init
        order by SerialNumber, id
    ) sq  
    where select_it = 1
    

    EDIT:

    Explanation:

    With this line

        , (select @prev_toner:=0, @prev_sn:=SerialNumber from Table1 order by SerialNumber 
    

    we just initialize the variables @prev_toner and @prev_sn on the fly. It's the same as not having this line in the query at all but writing in front of the query

    SET @prev_toner = 0;
    SET @prev_sn = (select serialnumber from your_table order by serialnumber limit 1);
    SELECT ...
    

    So why use a query to assign a value to @prev_sn, and why order by serialnumber? The order by is very important. Without an order by there's no guaranteed order in which rows are returned. Also, we access the previous row's value with variables, so it's important that equal serial numbers are "grouped together".

    The columns in the select clause are evaluated one after another, so it's important that you first select this line

    if(@prev_toner < Remain_Toner_Black and @prev_sn = SerialNumber, 1, 0) as select_it,
    

    before you select these two lines

    @prev_sn := SerialNumber,
    @prev_toner := Remain_Toner_Black
    

    Why is that? The last two lines just assign the current row's values to the variables. Therefore, in this line

    if(@prev_toner < Remain_Toner_Black and @prev_sn = SerialNumber, 1, 0) as select_it,
    

    the variables still hold the values of the previous row. And what we do here is nothing more than saying "if the previous row's value in column Remain_Toner_Black is smaller than the one in the current row and the previous row's serial number is the same as the current row's serial number, return 1, else return 0."

    Then we can simply say in the outer query "select every row, where the above returned 1".

    Given your query, you don't need all these subqueries. They are very expensive and unnecessary. Actually it's quite insane. In this part of the query

        SELECT  a.ID, 
                a.Time, 
                a.SerialNumber, 
                a.Remain_Toner_Black,
                a.Remain_Toner_Cyan,
                a.Remain_Toner_Magenta,
                a.Remain_Toner_Yellow,
                (
                    SELECT  COUNT(*)
                    FROM    Reports c
                    WHERE   c.SerialNumber = a.SerialNumber AND
                            c.ID <= a.ID) AS RowNumber
        FROM    Reports a
    

    you select the whole table and for every row you count the rows within that group. That's a dependent subquery. All just to have some sort of row number. Then you do this a second time, just so you can join those two temporary tables to get the previous row. Really, no wonder the performance is horrible.

    So, how to adjust my solution to your query? Instead of the one variable I used to get the previous row for Remain_Toner_Black use four for the colours black, cyan, magenta and yellow. And just join the Printers and Customers table like you did already. Don't forget the order by and you're done.
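    On engines with window functions (MySQL 8+, or the SQLite 3.25+ bundled with recent Python, as used below), the user-variable trick can be replaced by LAG(); a sketch with Python's sqlite3 on a made-up table with the columns from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # bundled SQLite must be >= 3.25 for window functions
conn.executescript("""
    CREATE TABLE Table1 (id INTEGER PRIMARY KEY,
                         SerialNumber TEXT,
                         Remain_Toner_Black INTEGER);
    INSERT INTO Table1 VALUES
        (1, 'A', 50), (2, 'A', 40), (3, 'A', 45),
        (4, 'B', 90), (5, 'B', 95);
""")

# Select rows whose toner level is higher than in the previous row of the
# same serial number, using LAG instead of user variables; the first row
# of each serial number has a NULL previous value and is filtered out.
rows = conn.execute("""
    SELECT id, SerialNumber, Remain_Toner_Black
    FROM (
        SELECT t.*,
               LAG(Remain_Toner_Black) OVER (
                   PARTITION BY SerialNumber ORDER BY id
               ) AS prev_toner
        FROM Table1 t
    )
    WHERE prev_toner < Remain_Toner_Black
    ORDER BY id
""").fetchall()

print(rows)  # [(3, 'A', 45), (5, 'B', 95)]
```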

    qid & accept id: (24040834, 24042008) query: converting sysdate to datetime format soup:

    soup wrap:

    There is a little trick because of the T inside your format, so you have to cut it in two:

    with w as
    (
      select sysdate d from dual
    )
    select to_char(w.d, 'yyyy-mm-dd') || 'T' || to_char(w.d, 'hh24:mi:ss')
    from w;
    

    EDIT : A better way exists in a single call to to_char, as shown in this other SO post:

    select to_char(sysdate, 'yyyy-mm-dd"T"hh24:mi:ss') from dual;
    
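    For comparison, the same literal-T formatting in Python needs no quoting trick, since strftime passes unrecognized characters through unchanged:

```python
from datetime import datetime

d = datetime(2014, 6, 5, 14, 30, 59)

# ISO-8601 style timestamp, equivalent to
# to_char(sysdate, 'yyyy-mm-dd"T"hh24:mi:ss') in Oracle.
stamp = d.strftime("%Y-%m-%dT%H:%M:%S")
print(stamp)  # 2014-06-05T14:30:59
```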
    qid & accept id: (24040926, 24041132) query: SQL Query Hotel Room from two tables (Type and Availability) soup:
    soup wrap:
    SELECT * from Room R
    INNER JOIN Booking B on B.Room_ID = R.Room_ID
    where Room_Floor = 1
    AND From_date BETWEEN GETDATE() AND To_date
    

    This will find all bookings for rooms on Floor 1

    SELECT * from Room R
    where not exists (select * from Booking where Room_ID = R.Room_ID and GETDATE()
    Between From_date AND To_date)
    and Room_Floor = 2
    

    This will find all available rooms on floor 2

    Something like that I think
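    The NOT EXISTS availability check can be sketched end to end with Python's sqlite3 (schema and dates are made up; CURRENT_TIMESTAMP stands in for GETDATE()):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Room    (Room_ID INTEGER PRIMARY KEY, Room_Floor INTEGER);
    CREATE TABLE Booking (Room_ID INTEGER, From_date TEXT, To_date TEXT);
    INSERT INTO Room VALUES (1, 2), (2, 2), (3, 1);
    -- Room 1 is booked right now; room 2 has only a past booking.
    INSERT INTO Booking VALUES
        (1, '2000-01-01', '2999-01-01'),
        (2, '2000-01-01', '2000-01-02');
""")

# Floor-2 rooms with no booking covering the current moment.
available = conn.execute("""
    SELECT Room_ID FROM Room R
    WHERE NOT EXISTS (
        SELECT * FROM Booking
        WHERE Room_ID = R.Room_ID
          AND CURRENT_TIMESTAMP BETWEEN From_date AND To_date
    )
    AND Room_Floor = 2
""").fetchall()

print(available)  # [(2,)]
```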

    qid & accept id: (24082669, 24083105) query: Finding Unknown XML Grandchildren Using SQL soup:

    soup wrap:

    To get all nodes not only from the first level use /form//* with // instead of /form/*

    SELECT distinct Parent.Items.value('local-name(.)', 'varchar(100)') as 'Item'
        FROM    dbo.FormResults 
        CROSS APPLY xmlformfields.nodes('/form//*') as Parent(Items)
    

    SQLFiddle example

    To also get parent nodes, use the ../. syntax in the local-name() call. To get the index of a child inside its parent node, and to order by it, you can use the XQuery expression

    for $i in . return count(../*[. << $i])
    

    So the final query with order:

    SELECT distinct 
              Parent.Items.value('local-name(.)', 'varchar(100)') as 'Item',
              Parent.Items.value('local-name(../.)', 'varchar(100)') as 'ParentItem',
              Parent.Items.value('for $i in . return count(../*[. << $i])','int') 
                  as ChildIndex
        FROM    dbo.FormResults 
        CROSS APPLY xmlformfields.nodes('/form//*') as Parent(Items)
        ORDER BY ParentItem,ChildIndex
    

    SQLFiddle example
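    The same walk over every descendant, with its parent name and sibling index, is easy to check outside SQL Server; a Python ElementTree sketch on a made-up form document:

```python
import xml.etree.ElementTree as ET

xml = "<form><name/><address><street/><city/></address></form>"
root = ET.fromstring(xml)

# (element, parent, 1-based position among the parent's children),
# covering every descendant the way /form//* does.
rows = []
def walk(parent):
    for index, child in enumerate(parent, start=1):
        rows.append((child.tag, parent.tag, index))
        walk(child)
walk(root)

print(rows)
# [('name', 'form', 1), ('address', 'form', 2),
#  ('street', 'address', 1), ('city', 'address', 2)]
```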

    qid & accept id: (24116066, 24137252) query: Database schema for private messages with many different types of users soup:
    soup wrap:

    You should have a single range of userids that spans all four groups. Then you only need a single table for all message types. – Thilo

    This gives tables and statements. A table contains the rows that make its statement true.

    // assumes teacher(tid,...),student(sid,...),admin(aid,...),parent(pid,...)
    
    user(uid) -- user [uid] is teacher or student or admin or parent
    user_is_teacher(uid,tid) -- user [uid] is teacher [tid]
    user_is_student(uid,sid) -- user [uid] is student [sid]
    user_is_admin(uid,aid) -- user [uid] is admin [aid]
    user_is_parent(uid,pid) -- user [uid] is parent [pid]
    
    user_is_term_student(uid) -- user [uid] is term student
    user_is_course student(uid) -- user [uid] is course student
    
    message_was_sent(mid,sid,rid,date,...) -- message [mid] was sent by user [sid] to user [rid] at [date] ...
    message_was_private(mid) -- message [mid] was private
    

    (Observe that if you had just made statements about user ids, you would have discovered they are straightforward, not impossible.)

    A superkey is columns with a unique value. A key is a superkey containing no superkey. Figure them out. Here are some:

    user_is_teacher keys (uid),(tid)
    message_was_sent key mid,(sid,rid,date)
    

    A foreign key is columns whose value is a value of some key columns. Figure them out. Here are some:

    user_is_teacher fk uid to user uid, fk tid to teacher tid
    message_was_sent fk sid to user uid, rid to user uid
    

    Suggest you write every design in this format.
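    A minimal sketch of a few of those tables with their keys and foreign keys, using Python's sqlite3 (column lists trimmed to the ids; the enforcement check at the end is the point):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
conn.executescript("""
    CREATE TABLE user    (uid INTEGER PRIMARY KEY);
    CREATE TABLE teacher (tid INTEGER PRIMARY KEY);
    -- keys (uid),(tid); fk uid to user, fk tid to teacher
    CREATE TABLE user_is_teacher (
        uid INTEGER UNIQUE REFERENCES user(uid),
        tid INTEGER UNIQUE REFERENCES teacher(tid)
    );
    -- key mid,(sid,rid,date); fk sid and rid to user uid
    CREATE TABLE message_was_sent (
        mid  INTEGER PRIMARY KEY,
        sid  INTEGER REFERENCES user(uid),
        rid  INTEGER REFERENCES user(uid),
        date TEXT,
        UNIQUE (sid, rid, date)
    );
    INSERT INTO user VALUES (1), (2);
    INSERT INTO teacher VALUES (10);
    INSERT INTO user_is_teacher VALUES (1, 10);
    INSERT INTO message_was_sent VALUES (100, 1, 2, '2014-06-10');
""")

# A message from a nonexistent user id is rejected by the foreign key.
try:
    conn.execute("INSERT INTO message_was_sent VALUES (101, 99, 2, '2014-06-11')")
    fk_enforced = False
except sqlite3.IntegrityError:
    fk_enforced = True
print(fk_enforced)  # True
```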

    qid & accept id: (24170440, 24170540) query: SQL set column to row count soup:

    soup wrap:

    You can fetch the count of Cars that belong to a driver, along with all Driver data with the following SELECT query:

    SELECT *
        ,(
            SELECT COUNT(*)
            FROM Cars c
            WHERE c.DriverID = d.DriverID
            )
    FROM Driver d
    

    You can UPDATE the NumCars column with the following statement:

    UPDATE Driver
    SET NumCars = (
        SELECT COUNT(*)
        FROM Cars
        WHERE Driver.DriverID = Cars.DriverID
        )
    
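    Both statements run as-is on most engines; a runnable sketch with Python's sqlite3 (the Driver/Cars schema is assumed from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Driver (DriverID INTEGER PRIMARY KEY, NumCars INTEGER);
    CREATE TABLE Cars   (CarID INTEGER PRIMARY KEY, DriverID INTEGER);
    INSERT INTO Driver (DriverID) VALUES (1), (2);
    INSERT INTO Cars VALUES (10, 1), (11, 1), (12, 2);
""")

# Correlated subquery: one count per Driver row being updated.
conn.execute("""
    UPDATE Driver
    SET NumCars = (
        SELECT COUNT(*)
        FROM Cars
        WHERE Driver.DriverID = Cars.DriverID
    )
""")

result = conn.execute(
    "SELECT DriverID, NumCars FROM Driver ORDER BY DriverID"
).fetchall()
print(result)  # [(1, 2), (2, 1)]
```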
    qid & accept id: (24194784, 24194895) query: Get SQL Results Between Specific Weekdays and Times soup:

    soup wrap:

    Just add hours:

    BETWEEN DATEADD(hh, 7, DATEADD(wk, DATEDIFF(wk, 7, GETDATE()), 7))
        AND DATEADD(hh, 17, DATEADD(wk, DATEDIFF(wk, 11, GETDATE()), 11))
    

    If you need to get results within working hours for each day you need to set the time ranges separately:

    myDate BETWEEN DATEADD(hh, 7, DATEADD(wk, DATEDIFF(wk, 7, GETDATE()), 7)) 
       AND DATEADD(hh, 17, DATEADD(wk, DATEDIFF(wk, 7, GETDATE()), 7)) OR 
    myDate BETWEEN DATEADD(hh, 7, DATEADD(wk, DATEDIFF(wk, 8, GETDATE()), 8)) 
       AND DATEADD(hh, 17, DATEADD(wk, DATEDIFF(wk, 8, GETDATE()), 8)) OR 
    myDate BETWEEN DATEADD(hh, 7, DATEADD(wk, DATEDIFF(wk, 9, GETDATE()), 9)) 
       AND DATEADD(hh, 17, DATEADD(wk, DATEDIFF(wk, 9, GETDATE()), 9)) etc.
    

    Update: if you have other conditions following the date/time condition in your WHERE clause, do not forget to enclose the OR'ed conditions in brackets:

    WHERE
    (myDate BETWEEN DATEADD(hh, 7, DATEADD(wk, DATEDIFF(wk, 7, GETDATE()), 7)) 
       AND DATEADD(hh, 17, DATEADD(wk, DATEDIFF(wk, 7, GETDATE()), 7)) OR 
     myDate BETWEEN DATEADD(hh, 7, DATEADD(wk, DATEDIFF(wk, 8, GETDATE()), 8)) 
       AND DATEADD(hh, 17, DATEADD(wk, DATEDIFF(wk, 8, GETDATE()), 8)) OR 
     myDate BETWEEN DATEADD(hh, 7, DATEADD(wk, DATEDIFF(wk, 9, GETDATE()), 9)) 
       AND DATEADD(hh, 17, DATEADD(wk, DATEDIFF(wk, 9, GETDATE()), 9)) etc.
    ) AND Direction = 1 AND VMDuration = 0 AND ... etc.
    

    Read about SQL Server operator precedence here for more information
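    The brackets matter because AND binds tighter than OR; Python's and/or have the same precedence, so the effect can be sketched in one snippet (the boolean flags only stand in for the SQL comparisons):

```python
# Two date-range tests (OR'ed) and one extra condition (AND'ed),
# reduced to booleans standing in for the SQL comparisons.
in_range_1 = True     # myDate falls in Monday's working hours
in_range_2 = False    # myDate falls in Tuesday's working hours
direction_ok = False  # Direction = 1 fails

# AND binds tighter than OR, so without brackets the first range
# test escapes the Direction filter entirely:
without_brackets = in_range_1 or in_range_2 and direction_ok
with_brackets = (in_range_1 or in_range_2) and direction_ok

print(without_brackets, with_brackets)  # True False
```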

    qid & accept id: (24207240, 24210673) query: Recursive CTE with alternating tables soup:

    soup wrap:

    Here is a recursive example that I believe meets your criteria. I added a ParentId to the result set, which will be NULL for the root/base file, since it does not have a parent.

    declare @BaseTableId int;
    set @BaseTableId  = 1;
    
    ; WITH cteRecursive as (
        --anchor/root parent file
        SELECT null as ParentFileId
            , f.FileId as ChildFileID
            , lt.RecursiveId 
            , 0 as [level]
            , bt.BaseTableId
        FROM BaseTable bt
            INNER JOIN Files f
                on bt.BaseTableId = f.BaseTableId
            INNER JOIN LinkingTable lt
                on f.FileId = lt.FileId
        WHERE bt.BaseTableId = @BaseTableId 
    
        UNION ALL 
    
        SELECT cte.ChildFileID as ParentFileID 
            , f.FileId as ChildFileID
            , lt.RecursiveId
            , cte.level + 1 as [level]
            , cte.BaseTableId
        FROM cteRecursive cte
            INNER JOIN Files f on cte.RecursiveId = f.RecursiveId
            INNER JOIN LinkingTable lt ON lt.FileId = f.FileId
    )
    SELECT * 
    FROM cteRecursive
    ;
    

    Results for @BaseTableID = 1:

    ParentFileId ChildFileID RecursiveId level       BaseTableId
    ------------ ----------- ----------- ----------- -----------
    NULL         1           1           0           1
    1            3           2           1           1
    3            4           3           2           1
    

    Results for @BaseTableID = 2:

    ParentFileId ChildFileID RecursiveId level       BaseTableId
    ------------ ----------- ----------- ----------- -----------
    NULL         2           1           0           2
    NULL         2           4           0           2
    2            6           5           1           2
    6            7           6           2           2
    2            3           2           1           2
    3            4           3           2           2
    
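    The anchor-plus-UNION ALL shape is the same in any engine with recursive CTEs; a simplified sketch with Python's sqlite3, collapsing the alternating Files/LinkingTable joins into a single made-up edge table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE FileLinks (ParentFileId INTEGER, ChildFileId INTEGER);
    INSERT INTO FileLinks VALUES (1, 3), (3, 4);  -- chain 1 -> 3 -> 4
""")

rows = conn.execute("""
    WITH RECURSIVE cte (ParentFileId, ChildFileId, level) AS (
        -- anchor: the root file has no parent
        SELECT NULL, 1, 0
        UNION ALL
        -- recursive member: follow links one level at a time
        SELECT fl.ParentFileId, fl.ChildFileId, cte.level + 1
        FROM cte
        JOIN FileLinks fl ON fl.ParentFileId = cte.ChildFileId
    )
    SELECT * FROM cte
""").fetchall()

print(rows)  # [(None, 1, 0), (1, 3, 1), (3, 4, 2)]
```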
    qid & accept id: (24275420, 24279757) query: How to group multiple values into a single column in SQL soup:

    soup wrap:

    For older versions, I guess WM_CONCAT would work. Modifying Gordon Linoff's query:

    SELECT T1."PN" as "Part Number", max(T2."QTY") as "Quantity", T2."BRANCH" AS "Location",
           WM_CONCAT(T3."STOCK") as Bins
    FROM "XYZ"."PARTS" T1 JOIN
         "XYZ"."BALANCES" T2
         ON T2."PART_ID" = T1."PART_ID" JOIN
         "XYZ"."DETAILS" T3
         ON T3."PART_ID" = T1."PART_ID"
    GROUP BY t1.PN, t2.Branch
    ORDER BY "Part Number", "Location";
    

    Also refer to this link for an alternate approach (including the answer from the link here for reference):

    create table countries ( country_name varchar2 (100));
    insert into countries values ('Albania');
    insert into countries values ('Andorra');
    insert into countries values ('Antigua');
    
    
    SELECT SUBSTR (SYS_CONNECT_BY_PATH (country_name , ','), 2) csv
          FROM (SELECT country_name , ROW_NUMBER () OVER (ORDER BY country_name ) rn,
                       COUNT (*) OVER () cnt
                  FROM countries)
         WHERE rn = cnt
    START WITH rn = 1
    CONNECT BY rn = PRIOR rn + 1;
    
    CSV                                                                             
    --------------------------
    Albania,Andorra,Antigua    
    
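    On MySQL or SQLite the equivalent aggregation is GROUP_CONCAT (and Oracle 11gR2+ has LISTAGG); a sketch of the countries example with Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE countries (country_name VARCHAR(100));
    INSERT INTO countries VALUES ('Albania'), ('Andorra'), ('Antigua');
""")

# One aggregate call replaces the SYS_CONNECT_BY_PATH walk.
csv = conn.execute("""
    SELECT GROUP_CONCAT(country_name, ',')
    FROM (SELECT country_name FROM countries ORDER BY country_name)
""").fetchone()[0]

print(csv)  # Albania,Andorra,Antigua
```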
    qid & accept id: (24287463, 24287526) query: Create SQL summary using union soup:

    soup wrap:

    It looks like you want to add

     WITH ROLLUP
    

    to the end of your query

    eg:

    Select sum(a) as col1, sum(b) as col2
    from yourtable
    group by something
    with rollup
    

    Depending on the full nature of your query, you may prefer to use with cube, which is similar. See http://technet.microsoft.com/en-us/library/ms189305(v=sql.90).aspx

    qid & accept id: (24291644, 24292276) query: Extracting first available number and its following text from a string soup:

    soup wrap:

    SQL Fiddle

    MS SQL Server 2008 Schema Setup:

    CREATE TABLE Table1
        ([dosage] varchar(144))
    ;
    
    INSERT INTO Table1
        ([dosage])
    VALUES
        ('Pain Medication. 20 mg/100 mL NS (0.2mg/mL) 
          Therapy: IV PCA Adult / Qualifier: Standard Continuous Rate = 0 mg/hr, 
          IV, Routine PCA Dose = 0.4 mg')
    ;
    

    Query 1:

    SELECT substring(dosage,
                     PATINDEX('%[0-9]%',dosage),
                     PATINDEX('%/%',dosage)-PATINDEX('%[0-9]%',dosage)
                    )
    FROM Table1
    

    Results:

    | COLUMN_0 |
    |----------|
    |    20 mg |
    
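    The same "from the first digit up to the first slash" extraction, for comparison, in Python, where one regular expression replaces the two PATINDEX calls:

```python
import re

dosage = ("Pain Medication. 20 mg/100 mL NS (0.2mg/mL) "
          "Therapy: IV PCA Adult / Qualifier: Standard Continuous Rate = 0 mg/hr, "
          "IV, Routine PCA Dose = 0.4 mg")

# Everything from the first digit up to (not including) the first '/'.
match = re.search(r"\d[^/]*", dosage)
print(match.group(0))  # 20 mg
```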
    qid & accept id: (24310683, 24311027) query: Cursor? Loop? Aggregate up rows data along with row results soup:

    soup wrap:

    You can do this by using the GROUPING SETS extension of the GROUP BY clause:

    SELECT  Description, 
            COALESCE(Partition, 'Total') AS Partition,
            SUM(Total) AS Total
    FROM    MyTable
    GROUP BY GROUPING SETS ((Description, Partition), (Description));
    

    or you could use:

    SELECT  Description, 
            COALESCE(Partition, 'Total') AS Partition,
            SUM(Total) AS Total
    FROM    MyTable
    GROUP BY ROLLUP (Description, Partition);
    

    Without ROLLUP, you can do this using UNION ALL:

    SELECT  Description, 
            Partition,
            Total
    FROM    MyTable
    UNION ALL
    SELECT  Description, 
            'Total' AS Partition,
            SUM(Total) AS Total
    FROM    MyTable
    GROUP BY Description;
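The UNION ALL variant is the most portable of the three. A quick sketch using Python's sqlite3 (SQLite supports neither GROUPING SETS nor ROLLUP, so only this form applies there); the sample rows are invented, and the question's column name Parition is kept as-is:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (Description TEXT, Parition TEXT, Total INTEGER)")
conn.executemany("INSERT INTO MyTable VALUES (?, ?, ?)",
                 [("A", "p1", 10), ("A", "p2", 20), ("B", "p1", 5)])

# detail rows plus one synthetic 'Total' row per Description
rows = conn.execute("""
    SELECT Description, Parition, Total
    FROM MyTable
    UNION ALL
    SELECT Description, 'Total' AS Parition, SUM(Total) AS Total
    FROM MyTable
    GROUP BY Description
""").fetchall()
print(rows)
```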
    
    qid & accept id: (24316425, 24316488) query: strange calculate data on a table soup:

    soup wrap:

    You can do what you want with a cumulative sum. The following syntax is ANSI standard and should work (depending on the version of your database):

    select sum(a*(revcumb - b)) as a_sum, sum(b*(revcuma - a)) as b_sum
    from (select t.*,
                 sum(b) over (order by id desc) as revcumb,
                 sum(a) over (order by id desc) as revcuma
          from table t
         ) t;
    

    Note that instead of using rows between or range between, this just subtracts the value in the current row from the (reverse) cumulative sum.

    Also note that this assumes the presence of an id column or some other column to specify the ordering of rows. SQL tables are inherently unordered, so you need a column to specify ordering, when that is important.

    And, if you don't have cumulative sum (i.e. SQL Server < 2012), then you can do the same thing with correlated subqueries.

    EDIT:

    Sybase may or may not support the above. There are so many different versions of that database that it is hardly worth anything as a tag. I think this will work on most versions:

    select sum(a*revcumb) as a_sum, sum(b*revcuma) as b_sum
    from (select t.*,
                 (select sum(b) from table t2 where t2.id > t.id) as revcumb,
                 (select sum(a) from table t2 where t2.id > t.id) as revcuma
          from table t
         ) t;
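The correlated-subquery version can be tried with Python's sqlite3 as a stand-in (the sample table t and its values are invented). Because each row's reverse cumulative sum here strictly excludes the row itself (t2.id > t.id), no subtraction is needed; COALESCE handles the last row, where the subquery returns NULL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, a INTEGER, b INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(1, 1, 10), (2, 2, 20), (3, 3, 30)])

a_sum, b_sum = conn.execute("""
    SELECT SUM(a * COALESCE(revcumb, 0)) AS a_sum,
           SUM(b * COALESCE(revcuma, 0)) AS b_sum
    FROM (SELECT t.*,
                 (SELECT SUM(b) FROM t t2 WHERE t2.id > t.id) AS revcumb,
                 (SELECT SUM(a) FROM t t2 WHERE t2.id > t.id) AS revcuma
          FROM t) sub
""").fetchone()
print(a_sum, b_sum)  # 110 110
```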
    
    qid & accept id: (24342739, 24342931) query: Concatenate rows from a complex select in SQL soup:

    soup wrap:

    You can use a CTE:

    WITH cteTbl (NominationId, NominationOrderId, GiftName) AS ( Your Query here)
    

    And then concatenate all rows with the same NominationId and NominationOrderId with FOR XML PATH('') and after that replace the first comma , with STUFF:

    SELECT t.NominationId
         , t.NominationOrderId
         , STUFF( ( SELECT ', ' + GiftName
                    FROM cteTbl
                    WHERE NominationId = t.NominationId
                      AND NominationOrderId = t.NominationOrderId
                    ORDER BY GiftName DESC
                    FOR XML PATH('') ), 1, 1, '')
    FROM cteTbl t 
    GROUP BY t.NominationId
           , t.NominationOrderId
    

    SQLFiddle
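FOR XML PATH('') + STUFF is SQL Server's idiom for this; on engines with a native string aggregate the same result is a single GROUP_CONCAT / STRING_AGG call. A sketch with Python's sqlite3 and invented sample data (note SQLite does not guarantee the concatenation order within a group):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gifts (NominationId INT, NominationOrderId INT, GiftName TEXT)")
conn.executemany("INSERT INTO gifts VALUES (?, ?, ?)",
                 [(1, 1, "Pen"), (1, 1, "Mug"), (2, 1, "Hat")])

# one row per (NominationId, NominationOrderId) with the gift names joined
rows = conn.execute("""
    SELECT NominationId, NominationOrderId, GROUP_CONCAT(GiftName, ', ')
    FROM gifts
    GROUP BY NominationId, NominationOrderId
    ORDER BY NominationId
""").fetchall()
print(rows)
```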

    qid & accept id: (24372541, 24373732) query: SQL PIVOT, JOIN, and aggregate function to generate report soup:

    soup wrap:

    Interesting. Pivot requires an aggregate function to build the 1-5 values, so you'll have to rewrite your inner query probably as a union, and use MAX() as a throwaway aggregate function (throwaway because every record should be unique, so MAX, MIN, SUM, etc. should all return the same value):

    SELECT * INTO #newblah from (
       SELECT PersonFK, 1 as StrengthIndex, Strength1 as Strength from blah UNION ALL
       SELECT PersonFK, 2 as StrengthIndex, Strength2 as Strength from blah UNION ALL
       SELECT PersonFK, 3 as StrengthIndex, Strength3 as Strength from blah UNION ALL
       SELECT PersonFK, 4 as StrengthIndex, Strength4 as Strength from blah UNION ALL
       SELECT PersonFK, 5 as StrengthIndex, Strength5 as Strength from blah
     )
    

    Then

    select PersonFK, [Achiever], [Activator], [Adaptability], [Analytical], [Belief] .....
    from
    (
      select PersonFK, StrengthIndex, Strength
      from #newblah
    ) pivotsource
    pivot
    (
      max(StrengthIndex)
      for Strength in ([Achiever], [Activator], [Adaptability], [Analytical], [Belief] ..... )
    ) myPivot;
    

    The result of that query should be able to be joined back to your other tables to get the Person name, Strength Category, and Team name, so I'll leave that to you. You don't HAVE to do the first join as a temporary table -- you could do it as a subselect inline, so this could all be done in one SQL query, but that seems painful if you can avoid it.
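Where PIVOT is unavailable, the same reshaping is plain conditional aggregation: one MAX(CASE ...) per target column. A sketch with Python's sqlite3 and invented sample rows (three strengths instead of the full list):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE newblah (PersonFK INT, StrengthIndex INT, Strength TEXT)")
conn.executemany("INSERT INTO newblah VALUES (?, ?, ?)",
                 [(1, 1, "Achiever"), (1, 2, "Activator"), (1, 3, "Belief")])

# each CASE picks out one strength; MAX() collapses the group to one row
row = conn.execute("""
    SELECT PersonFK,
           MAX(CASE WHEN Strength = 'Achiever'  THEN StrengthIndex END) AS Achiever,
           MAX(CASE WHEN Strength = 'Activator' THEN StrengthIndex END) AS Activator,
           MAX(CASE WHEN Strength = 'Belief'    THEN StrengthIndex END) AS Belief
    FROM newblah
    GROUP BY PersonFK
""").fetchone()
print(row)  # (1, 1, 2, 3)
```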

    qid & accept id: (24375773, 24377611) query: Replace each letter with it's ASCII code in a string in PL/SQL soup:

    soup wrap:

    I think you might be looking for something like this:

    CREATE OR REPLACE FUNCTION FUBAR_STR(in_str VARCHAR2) RETURN VARCHAR2 AS
      out_str VARCHAR2(4000) := '';
    BEGIN
      FOR i IN 1..LENGTH(in_str) LOOP
        out_str := out_str || TO_CHAR(ASCII(SUBSTR(in_str,i,1)) - 55);
      END LOOP;
      RETURN out_str;
    END FUBAR_STR;
    

    So when you run:

    select fubar_str('abcd') from dual;
    

    You get: 42434445.

    Here is the reversible, safer one to use.

    CREATE OR REPLACE FUNCTION FUBAR_STR(in_str VARCHAR2) RETURN VARCHAR2 AS
      out_str VARCHAR2(32676) := '';
    BEGIN
      FOR i IN 1..LEAST(LENGTH(in_str),10892) LOOP
        out_str := out_str || LPAD(TO_CHAR(ASCII(SUBSTR(in_str,i,1)) - 55),3,'0');
      END LOOP;
      RETURN out_str;
    END FUBAR_STR;
    

    So when you run:

    select fubar_str('abcd') from dual;
    

    You get: 042043044045.

    And because I'm really bored tonight:

    CREATE OR REPLACE FUNCTION UNFUBAR_STR(in_str VARCHAR2) RETURN VARCHAR2 AS
      out_str VARCHAR2(10892) := '';
    BEGIN
      FOR i IN 0..(((LENGTH(in_str) - MOD(LENGTH(in_str),3))/3) - 1) LOOP
        out_str := out_str || CHR(TO_NUMBER(LTRIM(SUBSTR(in_str,(i * 3) + 1,3),'0')) + 55);
      END LOOP;
      RETURN out_str;
    END UNFUBAR_STR;
    

    So when you run:

    select unfubar_str('042043044045') from dual;
    

    You get: abcd.
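The encode/decode pair is easy to check outside PL/SQL. A Python transliteration of the zero-padded version and its inverse:

```python
def fubar_str(s):
    # Each character becomes its ASCII code minus 55, zero-padded to 3 digits.
    return "".join(f"{ord(ch) - 55:03d}" for ch in s)

def unfubar_str(s):
    # Reverse: read 3-digit groups, add 55 back, convert to characters.
    return "".join(chr(int(s[i:i + 3]) + 55)
                   for i in range(0, len(s) - len(s) % 3, 3))

print(fubar_str("abcd"))            # 042043044045
print(unfubar_str("042043044045"))  # abcd
```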

    qid & accept id: (24391293, 24399074) query: Avoid multiple calls on same function when expanding composite result soup:

    soup wrap:

    A CTE is not even necessary. A plain subquery does the job as well (tested with pg 9.3):

    SELECT i, (f).*                     -- decompose here
    FROM  (
       SELECT i, (slow_func(i)) AS f    -- do not decompose here
       FROM   generate_series(1, 3) i
       ) sub;
    

    Be sure not to decompose the composite result of the function in the subquery. Reserve that for the outer query.
    Requires a well-known type, of course. Would not work with anonymous records.

    Or, what @Richard wrote, a LATERAL JOIN works, too. The syntax can be simpler:

    SELECT * FROM generate_series(1, 3) i, slow_func(i) f
    

    SQL Fiddle with EXPLAIN VERBOSE output for all variants. You can see multiple evaluation of the function if it happens.

    COST setting

    Generally (should not matter for this particular query), make sure to apply a high cost setting to your function, so the planner knows to avoid evaluating it more often than necessary. Like:

    CREATE OR REPLACE FUNCTION slow_function(int)
      RETURNS result_t AS
    $func$
        -- expensive body
    $func$ LANGUAGE sql IMMUTABLE COST 100000;

    Per documentation:

    Larger values cause the planner to try to avoid evaluating the function more often than necessary.

    qid & accept id: (24438529, 24438845) query: How can I find missing date range in sql server 2008? soup:

    soup wrap:

    There may be a simpler way to do this, but often when trying to find missing numbers/dates you need to create those numbers/dates then LEFT JOIN to your existing data to find what is missing. You can create the dates in question with a recursive cte:

    WITH cal AS (SELECT CAST('2014-07-01' AS DATE) dt
                  UNION  ALL
                  SELECT DATEADD(DAY,1,dt)
                  FROM cal
                  WHERE dt < '2014-07-30')
    SELECT *
    FROM cal
    

    Then, you LEFT JOIN to your table to get a list of missing dates:

    WITH cal AS (SELECT CAST('2014-07-01' AS DATE) dt
                  UNION  ALL
                  SELECT DATEADD(DAY,1,dt)
                  FROM cal
                  WHERE dt < '2014-07-30')
    SELECT DISTINCT cal.dt 
    FROM  cal
    LEFT JOIN YourTable a
       ON cal.dt BETWEEN CAST(SS_StartDate AS DATE) AND CAST(SS_EndDate AS DATE)
    WHERE a.SS_StartDate IS NULL
    

    Then you need to find out whether or not consecutive rows belong in the same range, or if they have a gap between them, using DATEDIFF() and ROW_NUMBER():

    WITH cal AS (SELECT CAST('2014-07-01' AS DATE) dt
                  UNION  ALL
                  SELECT DATEADD(DAY,1,dt)
                  FROM cal
                  WHERE dt < '2014-07-30')
        ,dt_list AS (SELECT DISTINCT cal.dt 
                      FROM  cal
                      LEFT JOIN YourTable a
                        ON cal.dt BETWEEN CAST(SS_StartDate AS DATE) AND CAST(SS_EndDate AS DATE)
                      WHERE a.SS_StartDate IS NULL)        
    SELECT dt
          ,DATEDIFF(D, ROW_NUMBER() OVER(ORDER BY dt), dt) AS dt_range
    FROM dt_list
    

    Then use MIN() and MAX() to get the ranges:

    WITH cal AS (SELECT CAST('2014-07-01' AS DATE) dt
                  UNION  ALL
                  SELECT DATEADD(DAY,1,dt)
                  FROM cal
                  WHERE dt < '2014-07-30')
        ,dt_list AS (SELECT DISTINCT cal.dt 
                      FROM  cal
                      LEFT JOIN YourTable a
                        ON cal.dt BETWEEN CAST(SS_StartDate AS DATE) AND CAST(SS_EndDate AS DATE)
                      WHERE a.SS_StartDate IS NULL)        
        ,dt_range AS (SELECT dt
                             ,DATEDIFF(D, ROW_NUMBER() OVER(ORDER BY dt), dt) AS dt_range
                      FROM dt_list)
    SELECT  MIN(dt) AS BeginRange
           ,MAX(dt) AS EndRange
    FROM dt_range
    GROUP BY dt_range;
    --OPTION (MAXRECURSION 0)
    

    Demo: SQL Fiddle

    Note: If the range you're checking is more than 100 days you'll need to specify MAXRECURSION; 0 means no limit.

    Note2: If your SE dates are intended to drive the complete date range, then change the cal cte from fixed dates to queries using MIN() and MAX() respectively.
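The whole pipeline (calendar cte, anti-join, gap grouping) can be exercised with Python's sqlite3; the sample table and dates are invented, and a correlated COUNT(*) stands in for ROW_NUMBER() so it also runs on older SQLite builds:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE YourTable (SS_StartDate TEXT, SS_EndDate TEXT)")
conn.executemany("INSERT INTO YourTable VALUES (?, ?)",
                 [("2014-07-01", "2014-07-03"), ("2014-07-06", "2014-07-07")])

rows = conn.execute("""
    WITH RECURSIVE cal(dt) AS (
        SELECT '2014-07-01'
        UNION ALL
        SELECT date(dt, '+1 day') FROM cal WHERE dt < '2014-07-10'
    ),
    dt_list AS (
        SELECT cal.dt
        FROM cal
        LEFT JOIN YourTable a ON cal.dt BETWEEN a.SS_StartDate AND a.SS_EndDate
        WHERE a.SS_StartDate IS NULL
    ),
    dt_range AS (
        -- date minus its rank is constant within one consecutive run
        SELECT dt,
               julianday(dt) - (SELECT COUNT(*) FROM dt_list d2
                                WHERE d2.dt <= dt_list.dt) AS grp
        FROM dt_list
    )
    SELECT MIN(dt) AS BeginRange, MAX(dt) AS EndRange
    FROM dt_range
    GROUP BY grp
    ORDER BY BeginRange
""").fetchall()
print(rows)  # [('2014-07-04', '2014-07-05'), ('2014-07-08', '2014-07-10')]
```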

    qid & accept id: (24497436, 24497698) query: Select values from different rows in a mysql join soup:

    soup wrap:

    Since each category (and by the way, you might want to rename either the table or the level so that "category" doesn't mean two different things) has a singular known parent, but an indeterminate number of unknown children, you need to "walk up" from the most specific (at depth = 2) to the most general category, performing a self-join on the category table for each additional value you want to insert.

    If you're impatient, skip to the SQL Fiddle link at the bottom of the post. If you'd rather be walked through it, continue reading - it's really not that different from any other case where you have a surrogate ID that you want to replace with data from the corresponding table.

    You could start by looking at all the information:

    SELECT * FROM products AS P
            JOIN
        products_categories AS PC ON P.id = PC.product_id
            JOIN
        categories AS C ON PC.category_id = C.id
    WHERE P.id = 1 AND C.depth = 2;
    
    +----+------------+------------+-------------+----+-----------+-------+---------+
    | id | name       | product_id | category_id | id | parent_id | depth | name    |
    +----+------------+------------+-------------+----+-----------+-------+---------+
    | 1  | Rad Widget | 1          | 3           | 3  | 2         | 2     | Widgets |
    +----+------------+------------+-------------+----+-----------+-------+---------+
    

    First thing you have to do is recognize which information is useful and which is not. You don't want to be SELECT *-ing all day here. You have the first two columns you want, and the last column (recognize this as your "class"); you need parent_id to find the next column you want, and let's hold onto depth just for illustration. Forget the rest, they're clutter.

    So replace that * with specific column names, alias "class", and go after the data represented by parent_id. This information is stored in the category table - you might be thinking, but I already joined that table! Don't care; do it again, only give it a new alias. Remember that your ON condition is a bit different - the products_categories has done its job already, now you want the row that matches C.parent_id - and that you only need certain columns to find the next parent:

    SELECT
        P.id,
        P.name,
        C1.parent_id,
        C1.depth,
        C1.name,
        C.name AS 'class'
    FROM
        products AS P
            JOIN
        products_categories AS PC ON P.id = PC.product_id
            JOIN
        categories AS C ON PC.category_id = C.id
            JOIN
        categories AS C1 ON C.parent_id = C1.id
    WHERE
        P.id = 1
            AND C.depth = 2;
    
    +----+------------+-----------+---------------+---------+
    | id | name       | parent_id | name          | class   |
    +----+------------+-----------+---------------+---------+
    | 1  | Rad Widget | 1         | Miscellaneous | Widgets |
    +----+------------+-----------+---------------+---------+
    

    Repeat the process one more time, aliasing the column you just added and using the new C1.parent_id in your next join condition:

    SELECT
        P.id,
        P.name,
        PC.category_id,
        C2.parent_id,
        C2.depth,
        C2.name,
        C1.name AS 'category',
        C.name AS 'class'
    FROM
        products AS P
            JOIN
        products_categories AS PC ON P.id = PC.product_id
            JOIN
        categories AS C ON PC.category_id = C.id
            JOIN
        categories AS C1 ON C.parent_id = C1.id
            JOIN
        categories AS C2 ON C1.parent_id = C2.id
    WHERE
        P.id = 1
            AND C.depth = 2;
    
    +----+------------+-----------+-------+-------------+---------------+---------+
    | id | name       | parent_id | depth | name        | category      | class   |
    +----+------------+-----------+-------+-------------+---------------+---------+
    | 1  | Rad Widget | NULL      | 0     | Electronics | Miscellaneous | Widgets |
    +----+------------+-----------+-------+-------------+---------------+---------+
    

    Now we're clearly done; we can't join another copy on C2.parent_id = NULL and we also see that depth = 0, so all that's left to do is get rid of the columns we don't want to display and double check our aliases. Here it is in action on SQL Fiddle.
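The finished three-level walk can be reproduced with Python's sqlite3 using the post's sample data, keeping only the columns we decided to display:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (id INTEGER, name TEXT);
    CREATE TABLE categories (id INTEGER, parent_id INTEGER, depth INTEGER, name TEXT);
    CREATE TABLE products_categories (product_id INTEGER, category_id INTEGER);
    INSERT INTO products VALUES (1, 'Rad Widget');
    INSERT INTO categories VALUES (1, NULL, 0, 'Electronics'),
                                  (2, 1, 1, 'Miscellaneous'),
                                  (3, 2, 2, 'Widgets');
    INSERT INTO products_categories VALUES (1, 3);
""")

row = conn.execute("""
    SELECT P.name, C2.name AS top_level, C1.name AS category, C.name AS class
    FROM products P
    JOIN products_categories PC ON P.id = PC.product_id
    JOIN categories C  ON PC.category_id = C.id   -- depth 2
    JOIN categories C1 ON C.parent_id  = C1.id    -- depth 1
    JOIN categories C2 ON C1.parent_id = C2.id    -- depth 0
    WHERE P.id = 1 AND C.depth = 2
""").fetchone()
print(row)  # ('Rad Widget', 'Electronics', 'Miscellaneous', 'Widgets')
```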

    qid & accept id: (24550681, 24551996) query: How to create new table where database's name begin with ...? soup:

    soup wrap:

    There is a better and cheaper way to do this. It is very simple and works perfectly.

    With SELECT INTO statement you can copy the structure of a table as well as data to another table in same or external databases.

    Reference:http://www.w3schools.com/sql/sql_select_into.asp

    DECLARE @sql VARCHAR(8000)
    SET @sql=''
    SELECT @sql=@sql+'; SELECT * INTO '+name+'.dbo.E_Invent2 FROM OriginalDB.dbo.E_Invent2' FROM sysdatabases WHERE name LIKE 'CM_0%' and name<>'OriginalDB'
    SELECT @sql
    EXEC(@sql)
    

    Here OriginalDB is the name of the database where you have this table.

    If your table in OriginalDB carries data and you don't want to copy the data, only the structure, then you may try this:

    DECLARE @sql VARCHAR(8000)   
    
    SET @sql=''
        SELECT @sql=@sql+'; SELECT * INTO '+name+'.dbo.E_Invent2 FROM OriginalDB.dbo.E_Invent2 WHERE 1<>1' FROM sysdatabases WHERE name LIKE 'CM_0%' and name<>'OriginalDB'
        SELECT @sql
        EXEC(@sql)
    

    This should work; otherwise let me know if I can help you.

    NOTE: Constraints will not be copied
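Both tricks (copying structure plus data, and the impossible WHERE predicate for structure only) can be sketched with Python's sqlite3, with an ATTACHed database standing in for one of the CM_0* databases; table and column names are invented. As in SQL Server, CREATE TABLE ... AS SELECT does not copy constraints:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ':memory:' AS cm01")  # stands in for one target database
conn.execute("CREATE TABLE main.E_Invent2 (id INTEGER, qty INTEGER)")
conn.executemany("INSERT INTO main.E_Invent2 VALUES (?, ?)", [(1, 5), (2, 7)])

# structure + data
conn.execute("CREATE TABLE cm01.E_Invent2 AS SELECT * FROM main.E_Invent2")
# structure only: the impossible predicate keeps the column list but no rows
conn.execute("CREATE TABLE cm01.E_Invent2_empty AS "
             "SELECT * FROM main.E_Invent2 WHERE 1<>1")

full = conn.execute("SELECT COUNT(*) FROM cm01.E_Invent2").fetchone()[0]
empty = conn.execute("SELECT COUNT(*) FROM cm01.E_Invent2_empty").fetchone()[0]
print(full, empty)  # 2 0
```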

    qid & accept id: (24610143, 24610562) query: How to create grouped daily,weekly and monthly reports including calculated fields in SQL Server soup:

    soup wrap:

    I'm not sure if I understood your question correctly, but this gives you all the users created per day:

    SELECT year(userCreated), month(userCreated), day(userCreated), count(*)
    FROM Users
    GROUP BY year(userCreated), month(userCreated), day(userCreated)
    

    this one by month:

    SELECT year(userCreated), month(userCreated), count(*)
    FROM Users
    GROUP BY year(userCreated), month(userCreated)
    

    and this one by week:

    SELECT year(userCreated), datepart(week, userCreated), count(*)
    FROM Users
    GROUP BY year(userCreated), datepart(week, userCreated)
    

    Edit: regarding the missing total field, here is an example for the month query:

    SELECT year(userCreated), month(userCreated), count(*) AS NewCount,
    (SELECT COUNT(*) FROM Users u2 WHERE 
        CAST(CAST(year(u1.userCreated) AS VARCHAR(4)) + RIGHT('0' + CAST(month(u1.userCreated) AS VARCHAR(2)), 2) + '01' AS DATETIME) > u2.userCreated) AS TotalCount
    FROM Users u1
    GROUP BY year(userCreated), month(userCreated)
    

    Hope this helps for the other two queries.
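    As a quick sanity check of the month grouping, here is the same idea run against SQLite from Python (strftime() stands in for year()/month(), and the Users table below is a made-up fixture, not your schema):

```python
import sqlite3

# Hypothetical Users table; SQLite has no year()/month(), so strftime()
# stands in for them in the GROUP BY.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (userCreated TEXT)")
conn.executemany(
    "INSERT INTO Users VALUES (?)",
    [("2014-07-01",), ("2014-07-15",), ("2014-08-03",)],
)

# Per-month counts, mirroring the month query above.
rows = conn.execute(
    """
    SELECT strftime('%Y', userCreated) AS yr,
           strftime('%m', userCreated) AS mon,
           COUNT(*)
    FROM Users
    GROUP BY yr, mon
    ORDER BY yr, mon
    """
).fetchall()
print(rows)  # [('2014', '07', 2), ('2014', '08', 1)]
```

    The same pattern extends to the daily and weekly variants by adding strftime('%d', ...) or switching to strftime('%W', ...).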

    qid & accept id: (24622282, 24622345) query: Select from MS Access Table between two dates? soup:

    Try CDate() to convert your string into a date.

    select * from audience
    where CDate(audate) between #01/06/2014# and #01/08/2014#;

    If it doesn't work because CDate does not recognize your format, you can use DateSerial(year, month, day) to build a Date. You will need to use mid$ and CInt() to build the year, month and day arguments. Something like this for a format "yyyy-mm-dd":

    DateSerial(CInt(mid(audate, 1, 4)), CInt(mid(audate, 6, 2)), CInt(mid(audate, 9, 2)))

    Hope this helps.

    soup wrap:

    Try CDate() to convert your string into a date.

    select  *  from audience 
    where CDate(audate) between #01/06/2014# and #01/08/2014#;
    

    If it doesn't work because CDate does not recognize your format, you can use DateSerial(year, month, day) to build a Date. You will need to use mid$ and CInt() to build the year, month and day arguments. Something like this for a format "yyyy-mm-dd":

    DateSerial(CInt(mid(audate, 1, 4)), CInt(mid(audate, 6, 2)), CInt(mid(audate, 9, 2)))
    

    Hope this helps.
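    For illustration, the same mid()/CInt() slicing can be sketched in Python (date_serial is just a stand-in name for the VBA expression; mid() is 1-based while Python slices are 0-based):

```python
from datetime import date

# Rebuilds DateSerial(CInt(mid(audate, 1, 4)), ...) for a "yyyy-mm-dd"
# string; character positions shift by one because Python is 0-based.
def date_serial(audate):
    return date(int(audate[0:4]), int(audate[5:7]), int(audate[8:10]))

d = date_serial("2014-06-15")
print(d)  # 2014-06-15
```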

    qid & accept id: (24633875, 24635063) query: Oracle: insert from type table soup:

    Assuming that you have something like

    CREATE TYPE my_nested_table_type
        AS TABLE OF <>;

    DECLARE
      l_nt my_nested_table_type;
    BEGIN
      <>

    then the way to do a bulk insert of the data from the collection into a heap-organized table would be to use a FORALL

    FORALL i in 1..l_nt.count
      INSERT INTO some_table( <> )
        VALUES( l_nt(i).col1, l_nt(i).col2, ... , l_nt(i).colN );

    soup wrap:

    Assuming that you have something like

    CREATE TYPE my_nested_table_type
        AS TABLE OF <>;
    
    DECLARE
      l_nt my_nested_table_type;
    BEGIN
      <>
    

    then the way to do a bulk insert of the data from the collection into a heap-organized table would be to use a FORALL

    FORALL i in 1..l_nt.count
      INSERT INTO some_table( <> )
        VALUES( l_nt(i).col1, l_nt(i).col2, ... , l_nt(i).colN );
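    FORALL is Oracle-specific, but the idea it expresses — one statement bound against a whole batch of rows — can be sketched client-side. Here is a rough SQLite/Python analogue; the table and column names are illustrative, not Oracle API:

```python
import sqlite3

# executemany() is the closest client-side analogue to FORALL:
# a single INSERT executed against a batch of bound rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE some_table (col1 INTEGER, col2 TEXT)")

l_nt = [(1, "a"), (2, "b"), (3, "c")]  # stands in for the collection
conn.executemany("INSERT INTO some_table VALUES (?, ?)", l_nt)

count = conn.execute("SELECT COUNT(*) FROM some_table").fetchone()[0]
print(count)  # 3
```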
    
    qid & accept id: (24636896, 24637221) query: SQL sum of all unique values per date soup:

    How you combine values depends on the database. That is the only tricky part of a question that is otherwise basic SQL. Here is an example using the standard concat() function:

    select date, concat(event1, event2, event3) as comb_event, count(*)
    from example
    group by date, concat(event1, event2, event3)
    order by date, concat(event1, event2, event3);

    Depending on the database, the syntax might be:

    select date, event1 || event2 || event3 as comb_event, count(*)
    from example
    group by date, event1 || event2 || event3
    order by date, event1 || event2 || event3;

    or:

    select date, event1 + event2 + event3 as comb_event, count(*)
    from example
    group by date, event1 + event2 + event3
    order by date, event1 + event2 + event3;

    or even:

    select date, event1 & event2 & event3 as comb_event, count(*)
    from example
    group by date, event1 & event2 & event3
    order by date, event1 & event2 & event3;

    soup wrap:

    How you combine values depends on the database. That is the only tricky part of a question that is otherwise basic SQL. Here is an example using the standard concat() function:

    select date, concat(event1, event2, event3) as comb_event, count(*)
    from example
    group by date, concat(event1, event2, event3)
    order by date, concat(event1, event2, event3);
    

    Depending on the database, the syntax might be:

    select date, event1 || event2 || event3 as comb_event, count(*)
    from example
    group by date, event1 || event2 || event3
    order by date, event1 || event2 || event3;
    

    or:

    select date, event1 + event2 + event3 as comb_event, count(*)
    from example
    group by date, event1 + event2 + event3
    order by date, event1 + event2 + event3;
    

    or even:

    select date, event1 & event2 & event3 as comb_event, count(*)
    from example
    group by date, event1 & event2 & event3
    order by date, event1 & event2 & event3;
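    As a quick check that grouping on the concatenated events behaves as described, here is the || variant run against SQLite from Python (the example table is a made-up fixture):

```python
import sqlite3

# SQLite supports the standard || operator (the second variant above).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE example (date TEXT, event1 TEXT, event2 TEXT, event3 TEXT)"
)
conn.executemany("INSERT INTO example VALUES (?, ?, ?, ?)", [
    ("2014-07-01", "a", "b", "c"),
    ("2014-07-01", "a", "b", "c"),
    ("2014-07-01", "x", "y", "z"),
])

rows = conn.execute("""
    SELECT date, event1 || event2 || event3 AS comb_event, COUNT(*)
    FROM example
    GROUP BY date, comb_event
    ORDER BY date, comb_event
""").fetchall()
print(rows)  # [('2014-07-01', 'abc', 2), ('2014-07-01', 'xyz', 1)]
```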
    
    qid & accept id: (24656842, 24659678) query: Revert CAST(0xABCD AS date) soup:
    SELECT CAST(CAST(0xABCD AS INT) AS DATETIME)

    -- 2020-06-01 00:00:00.000

    SELECT CAST(CAST(CAST('2020-06-01 00:00:00.000' AS DATETIME) AS INT) AS BINARY(2))

    -- 0xABCD

    soup wrap:
    SELECT CAST(CAST(0xABCD AS INT) AS DATETIME)
    

    -- 2020-06-01 00:00:00.000

    SELECT CAST(CAST(CAST('2020-06-01 00:00:00.000' AS DATETIME) AS INT) AS BINARY(2))
    

    -- 0xABCD
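    The round trip works because SQL Server's DATETIME stores its date part as a day count from 1900-01-01. A quick Python check of that arithmetic (this mirrors the casts above, not any SQL Server API):

```python
from datetime import datetime, timedelta

# The integer behind 0xABCD is a day offset from SQL Server's
# DATETIME epoch, 1900-01-01.
epoch = datetime(1900, 1, 1)
d = epoch + timedelta(days=0xABCD)
print(d)  # 2020-06-01 00:00:00

# And back again: the day count of 2020-06-01 is 0xABCD.
days = (datetime(2020, 6, 1) - epoch).days
print(hex(days))  # 0xabcd
```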

    qid & accept id: (24657408, 24661152) query: Pass EXEC command as a variable into .sql soup:

    Turns out the issue was the semicolon at the end of my %command% variable's value. I removed the semicolon from the value of the variable and added it to the end of the exec command in the .sql file. I also wrapped the %command% parameter pass in quotes because the variable contained spaces.

    file.sql

    set serveroutput on
    variable out_val varchar2;
    exec &1;
    print out_val
    exit

    mybatch.bat

    set procedure=%1
    set param1=%2
    set param2=%3
    set strYN = ' '
    set command=%procedure%('%param1%', '%param2%', :out_val)

    rem ** This line stores out_val value Y or N as strYN.
    for /F "usebackq" %%i in (`sqlplus database/pw@user @"file.sql" "%command%"`) do (
        set stryn=%%i
        if /I "!strYN!"=="N" (goto:nextN) else (if /I "!strYN!"=="Y" goto:nextY)
    )
    soup wrap:

    Turns out the issue was the semicolon at the end of my %command% variable's value. I removed the semicolon from the value of the variable and added it to the end of the exec command in the .sql file. I also wrapped the %command% parameter pass in quotes because the variable contained spaces.

    file.sql

    set serveroutput on
    variable out_val varchar2;
    exec &1;
    print out_val
    exit
    

    mybatch.bat

    set procedure=%1
    set param1=%2
    set param2=%3
    set strYN = ' '
    set command=%procedure%('%param1%', '%param2%', :out_val)
    
    rem ** This line stores out_val value Y or N as strYN.
    for /F "usebackq" %%i in (`sqlplus database/pw@user @"file.sql" "%command%"`) do (
        set stryn=%%i
        if /I "!strYN!"=="N" (goto:nextN) else (if /I "!strYN!"=="Y" goto:nextY)
    )
    
    qid & accept id: (24660075, 25095318) query: Counting the number of hits for a given search query/term per document in Oracle soup:

    You can continue using CTX_DOC; the procedure HIGHLIGHT can be contorted slightly to do exactly what you're asking for.

    Using this environment:

    create table docs ( id number, text clob, primary key (id) );

    Table created.

    insert all
     into docs values (1, to_clob('a dog and a dog'))
     into docs values (2, to_clob('a dog and a cat'))
     into docs values (3, to_clob('just a cat'))
    select * from dual;

    3 rows created.

    create index i_text_docs on docs(text) indextype is ctxsys.context;

    Index created.

    CTX_DOC.HIGHLIGHT has an OUT parameter of a HIGHLIGHT_TAB type, which contains the count of the number of hits within a document.

    declare
       l_highlight ctx_doc.highlight_tab;
    begin
      ctx_doc.set_key_type('PRIMARY_KEY');

      for i in ( select * from docs where contains(text, 'dog') > 0 ) loop
         ctx_doc.highlight('I_TEXT_DOCS', i.id, 'dog', l_highlight);
         dbms_output.put_line('id: ' || i.id || ' hits: ' || l_highlight.count);
      end loop;

    end;
    /
    id: 1 hits: 2
    id: 2 hits: 1

    PL/SQL procedure successfully completed.

    Obviously if you're doing this in a query then a procedure isn't the best thing in the world, but you can wrap it in a function if you want:

    create or replace function docs_count (
            Pid in docs.id%type, Ptext in varchar2
             ) return integer is

       l_highlight ctx_doc.highlight_tab;
    begin
      ctx_doc.set_key_type('PRIMARY_KEY');
      ctx_doc.highlight('I_TEXT_DOCS', Pid, Ptext, l_highlight);
      return l_highlight.count;
    end;

    This can then be called normally

    select id
         , to_char(text) as text
         , docs_count(id, 'dog') as dogs
         , docs_count(id, 'cat') as cats
      from docs;

            ID TEXT                  DOGS       CATS
    ---------- --------------- ---------- ----------
             1 a dog and a dog          2          0
             2 a dog and a cat          1          1
             3 just a cat               0          1

    If possible, it might be simpler to replace the keywords as Gordon notes. I'd use the DBMS_LOB.GETLENGTH() function instead of simply LENGTH() to avoid potential problems, but REPLACE() works on CLOBs so this won't be a problem. Something like the following (assuming we're still searching for dogs)

    select (dbms_lob.getlength(text) - dbms_lob.getlength(replace(text, 'dog')))
             / length('dog')
      from docs

    It's worth noting that string searching gets progressively slower as strings get larger (hence the need for text indexing) so while this performs fine on the tiny example given it might suffer from performance problems on larger documents.

    I've just seen your comment:

    ... but it would require me going through each document and doing a count of the hits which frankly is computationally expensive

    No matter what you do you're going to have to go through each document. You want to find the exact number of instances of a string within another string and the only way to do this is to look through the entire string. (I would highly recommend reading Joel's post on strings; it makes a point about XML and relational databases but I think it fits nicely here too.) If you were looking for an estimate you could calculate the number of times a word appears in the first 100 characters and then average it out over the length of the LOB (crap algorithm I know), but you want to be accurate.

    Obviously we don't know how Oracle has implemented all their functions internally, but let's make some assumptions. To calculate the length of a string you need to literally count the number of bytes in it. This means iterating over the entire string. There are some algorithms to improve this, but they still involve iterating over the string. If you want to replace a string with another string, you have to iterate over the original string, looking for the string you want to replace.

    Theoretically, depending on how Oracle's implemented everything, using CTX_DOC.HIGHLIGHT should be quicker than anything else as it only has to iterate over the original string once, looking for the string you want to find and storing the byte/character offset from the start of the original string.

    The suggested length(text) - length(replace(text, 'dog')) approach may have to iterate three separate times over the original string (or something that's close to it in length). I doubt that it would actually do this as everything can be cached and Oracle should be storing the byte length to make LENGTH() efficient. This is the reason I suggest using DBMS_LOB.GETLENGTH rather than just LENGTH(); Oracle's almost certainly storing the byte length of the document.

    If you don't want to parse the document each time you run your queries it might be worth doing a single run when loading/updating data and store, separately, the words and the number of occurrences per document.

    soup wrap:

    You can continue using CTX_DOC; the procedure HIGHLIGHT can be contorted slightly to do exactly what you're asking for.

    Using this environment:

    create table docs ( id number, text clob, primary key (id) );
    
    Table created.
    
    insert all
     into docs values (1, to_clob('a dog and a dog'))
     into docs values (2, to_clob('a dog and a cat'))
     into docs values (3, to_clob('just a cat'))
    select * from dual;
    
    3 rows created.
    
    create index i_text_docs on docs(text) indextype is ctxsys.context;
    
    Index created.
    

    CTX_DOC.HIGHLIGHT has an OUT parameter of a HIGHLIGHT_TAB type, which contains the count of the number of hits within a document.

    declare
       l_highlight ctx_doc.highlight_tab;
    begin
      ctx_doc.set_key_type('PRIMARY_KEY');
    
      for i in ( select * from docs where contains(text, 'dog') > 0 ) loop
         ctx_doc.highlight('I_TEXT_DOCS', i.id, 'dog', l_highlight);
         dbms_output.put_line('id: ' || i.id || ' hits: ' || l_highlight.count);
      end loop;
    
    end;
    /
    id: 1 hits: 2
    id: 2 hits: 1
    
    PL/SQL procedure successfully completed.
    

    Obviously if you're doing this in a query then a procedure isn't the best thing in the world, but you can wrap it in a function if you want:

    create or replace function docs_count (
            Pid in docs.id%type, Ptext in varchar2
             ) return integer is
    
       l_highlight ctx_doc.highlight_tab;
    begin
      ctx_doc.set_key_type('PRIMARY_KEY');
      ctx_doc.highlight('I_TEXT_DOCS', Pid, Ptext, l_highlight);
      return l_highlight.count;
    end;
    

    This can then be called normally

    select id
         , to_char(text) as text
         , docs_count(id, 'dog') as dogs
         , docs_count(id, 'cat') as cats
      from docs;
    
            ID TEXT                  DOGS       CATS
    ---------- --------------- ---------- ----------
             1 a dog and a dog          2          0
             2 a dog and a cat          1          1
             3 just a cat               0          1
    

    If possible, it might be simpler to replace the keywords as Gordon notes. I'd use DBMS_LOB.GETLENGTH() function instead of simply LENGTH() to avoid potential problems, but REPLACE() works on CLOBs so this won't be a problem. Something like the following (assuming we're still searching for dogs)

    select (dbms_lob.getlength(text) - dbms_lob.getlength(replace(text, 'dog')))
             / length('dog')
      from docs
    

    It's worth noting that string searching gets progressively slower as strings get larger (hence the need for text indexing) so while this performs fine on the tiny example given it might suffer from performance problems on larger documents.


    I've just seen your comment:

    ... but it would require me going through each document and doing a count of the hits which frankly is computationally expensive

    No matter what you do you're going to have to go through each document. You want to find the exact number of instances of a string within another string and the only way to do this is to look through the entire string. (I would highly recommend reading Joel's post on strings; it makes a point about XML and relational databases but I think it fits nicely here too.) If you were looking for an estimate you could calculate the number of times a word appears in the first 100 characters and then average it out over the length of the LOB (crap algorithm I know), but you want to be accurate.

    Obviously we don't know how Oracle has implemented all their functions internally, but let's make some assumptions. To calculate the length of a string you need to literally count the number of bytes in it. This means iterating over the entire string. There are some algorithms to improve this, but they still involve iterating over the string. If you want to replace a string with another string, you have to iterate over the original string, looking for the string you want to replace.

    Theoretically, depending on how Oracle's implemented everything, using CTX_DOC.HIGHLIGHT should be quicker than anything else as it only has to iterate over the original string once, looking for the string you want to find and storing the byte/character offset from the start of the original string.

    The suggested length(text) - length(replace(text, 'dog')) approach may have to iterate three separate times over the original string (or something that's close to it in length). I doubt that it would actually do this as everything can be cached and Oracle should be storing the byte length to make LENGTH() efficient. This is the reason I suggest using DBMS_LOB.GETLENGTH rather than just LENGTH(); Oracle's almost certainly storing the byte length of the document.

    If you don't want to parse the document each time you run your queries it might be worth doing a single run when loading/updating data and store, separately, the words and the number of occurrences per document.
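    The length/replace counting trick is easy to sanity-check outside the database. In Python the same arithmetic looks like this (count_hits is just an illustrative name, and like the SQL version it counts non-overlapping matches):

```python
# Same arithmetic as the dbms_lob.getlength()/replace() query: the drop
# in length after stripping every occurrence, divided by the word
# length, gives the number of (non-overlapping) hits.
def count_hits(text, word):
    return (len(text) - len(text.replace(word, ""))) // len(word)

print(count_hits("a dog and a dog", "dog"))  # 2
print(count_hits("a dog and a cat", "dog"))  # 1
print(count_hits("just a cat", "dog"))       # 0
```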

    qid & accept id: (24669926, 24684247) query: Use SQL to remove duplicates from a type 2 slowly changing dimension soup:

    The following query, containing multiple CTEs, compresses the date ranges of the updates and removes duplicate values.

    1. First, ranks are assigned within each id group, based on the RowStartDate.

    2. Next, the maximum rank (next_rank_no) of the range of ranks which has the same value for NAME is determined. Thus, for the example data, row 1 of id=5 would have next_rank_no=5 and row 2 of id=4 would have next_rank_no=3. This version only handles the NAME column. If you want to handle additional columns, they must be included in the condition as well. For example, if you want to include a LOCATION column, then the join conditions would read as:

      left join sorted_versions sv2 on sv2.id = sv1.id and sv2.rank_no > sv1.rank_no and sv2.name = sv1.name and sv2.location = sv1.location
      left join sorted_versions sv3 on sv3.id = sv1.id and sv3.rank_no > sv1.rank_no and (sv3.name <> sv1.name or sv3.location <> sv1.location)

    3. Finally, the first row for each id is selected. Then, the row corresponding to the next_rank_no is selected in a recursive fashion.

    with sorted_versions as --ranks are assigned within each id group
    (
      select
        v1.id,
        v1.name,
        v1.RowStartDate,
        v1.RowEndDate,
        rank() over (partition by v1.id order by v1.RowStartDate) rank_no
      from versions v1
      left join versions v2 on (v1.id = v2.id and v2.RowStartDate = v1.RowEndDate)
    ),
    next_rank as --the maximum rank of the range of ranks which has the same value for NAME
    (
      select
      sv1.id id, sv1.rank_no rank_no, COALESCE(min(sv3.rank_no)-1 , COALESCE(max(sv2.rank_no), sv1.rank_no)) next_rank_no
      from sorted_versions sv1
      left join sorted_versions sv2 on sv2.id = sv1.id and sv2.rank_no > sv1.rank_no and sv2.name = sv1.name
      left join sorted_versions sv3 on sv3.id = sv1.id and sv3.rank_no > sv1.rank_no and sv3.name <> sv1.name
      group by sv1.id, sv1.rank_no
    ),
    versions_cte as --the rowenddate of the "maximum rank" is selected
    (
      select sv.id, sv.name, sv.rowstartdate, sv3.rowenddate, nr.next_rank_no rank_no
      from sorted_versions sv
      inner join next_rank nr on sv.id = nr.id and sv.rank_no = nr.rank_no and sv.rank_no = 1
      inner join sorted_versions sv3 on nr.id = sv3.id and nr.next_rank_no = sv3.rank_no
      union all
      select
        sv2.id,
        sv2.name,
        sv2.rowstartdate,
        sv3.rowenddate,
        nr.next_rank_no
      from versions_cte vc
      inner join sorted_versions sv2 on sv2.id = vc.id and sv2.rank_no = vc.rank_no + 1
      inner join next_rank nr on sv2.id = nr.id and sv2.rank_no = nr.rank_no
      inner join sorted_versions sv3 on nr.id = sv3.id and nr.next_rank_no = sv3.rank_no
    )
    select id, name, rowstartdate, rowenddate
    from versions_cte
    order by id, rowstartdate;

    SQL Fiddle demo

    soup wrap:

    The following query, containing multiple CTEs, compresses the date ranges of the updates and removes duplicate values.

    1. First, ranks are assigned within each id group, based on the RowStartDate.

    2. Next, the maximum rank (next_rank_no) of the range of ranks which has the same value for NAME is determined. Thus, for the example data, row 1 of id=5 would have next_rank_no=5 and row 2 of id=4 would have next_rank_no=3. This version only handles the NAME column. If you want to handle additional columns, they must be included in the condition as well. For example, if you want to include a LOCATION column, then the join conditions would read as:

      left join sorted_versions sv2 on sv2.id = sv1.id and sv2.rank_no > sv1.rank_no and sv2.name = sv1.name and sv2.location = sv1.location
      left join sorted_versions sv3 on sv3.id = sv1.id and sv3.rank_no > sv1.rank_no and (sv3.name <> sv1.name or sv3.location <> sv1.location)

    3. Finally, the first row for each id is selected. Then, the row corresponding to the next_rank_no is selected in a recursive fashion.

    with sorted_versions as --ranks are assigned within each id group
    (
      select 
        v1.id,
        v1.name,
        v1.RowStartDate,
        v1.RowEndDate,
        rank() over (partition by v1.id order by v1.RowStartDate) rank_no
      from versions v1
      left join versions v2 on (v1.id = v2.id and v2.RowStartDate = v1.RowEndDate)
    ),
    next_rank as --the maximum rank of the range of ranks which has the same value for NAME
    (
      select 
      sv1.id id, sv1.rank_no rank_no, COALESCE(min(sv3.rank_no)-1 , COALESCE(max(sv2.rank_no), sv1.rank_no)) next_rank_no
      from sorted_versions sv1
      left join sorted_versions sv2 on sv2.id = sv1.id and sv2.rank_no > sv1.rank_no and sv2.name = sv1.name
      left join sorted_versions sv3 on sv3.id = sv1.id and sv3.rank_no > sv1.rank_no and sv3.name <> sv1.name
      group by sv1.id, sv1.rank_no
    ),
    versions_cte as --the rowenddate of the "maximum rank" is selected 
    (
      select sv.id, sv.name, sv.rowstartdate, sv3.rowenddate, nr.next_rank_no rank_no
      from sorted_versions sv
      inner join next_rank nr on sv.id = nr.id and sv.rank_no = nr.rank_no and sv.rank_no = 1
      inner join sorted_versions sv3 on nr.id = sv3.id and nr.next_rank_no = sv3.rank_no  
      union all
      select
        sv2.id,
        sv2.name, 
        sv2.rowstartdate,
        sv3.rowenddate,
        nr.next_rank_no
      from versions_cte vc
      inner join sorted_versions sv2 on sv2.id = vc.id and sv2.rank_no = vc.rank_no + 1
      inner join next_rank nr on sv2.id = nr.id and sv2.rank_no = nr.rank_no  
      inner join sorted_versions sv3 on nr.id = sv3.id and nr.next_rank_no = sv3.rank_no
    )
    select id, name, rowstartdate, rowenddate
    from versions_cte
    order by id, rowstartdate;
    

    SQL Fiddle demo
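    The core compression idea — collapse consecutive rows with the same value, keeping the earliest start and the latest end — can also be sketched client-side. This Python fragment uses made-up rows for a single id, already sorted by RowStartDate:

```python
from itertools import groupby

# Rows for one id, sorted by start date: (name, start, end).
rows = [
    ("A", "2014-01-01", "2014-02-01"),
    ("A", "2014-02-01", "2014-03-01"),
    ("B", "2014-03-01", "2014-04-01"),
    ("A", "2014-04-01", "2014-05-01"),
]

# Collapse each run of consecutive rows with the same name into one
# date range: earliest start, latest end.
compressed = []
for name, run in groupby(rows, key=lambda r: r[0]):
    run = list(run)
    compressed.append((name, run[0][1], run[-1][2]))

print(compressed)
# [('A', '2014-01-01', '2014-03-01'), ('B', '2014-03-01', '2014-04-01'),
#  ('A', '2014-04-01', '2014-05-01')]
```

    Note that the later run of 'A' stays separate, exactly as in the SQL version: only consecutive duplicates are merged.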

    qid & accept id: (24707125, 24707293) query: How to merge two SQL rows (same item ID) with the SUM() qty but show only the last row's info? soup:

    You're correct in thinking of partition by; though you'll also need to use a join (or an inline SQL in the results). Simplified example below:

    select firstRow.id
    , firstRow.upc
    , firstRow.name
    , sum(d.value) TotalUPCValue
    from (
      select id, upc, name
      , row_number() over (partition by upc order by id) r
      from demo
    ) firstRow
    inner join demo d on d.upc = firstRow.upc
    where firstRow.r = 1
    group by firstRow.id
    , firstRow.upc
    , firstRow.name

    Working copy with table definition on SQL Fiddle: http://sqlfiddle.com/#!6/6bfee/1

    Here's the alternate version which doesn't use a join:

    select id
    , upc
    , name
    , (select sum(d.value) from demo d where d.upc = firstRow.upc) TotalUPCValue
    from (
      select id, upc, name
      , row_number() over (partition by upc order by id) r
      from demo
    ) firstRow
    where firstRow.r = 1

    SQL Fiddle: http://sqlfiddle.com/#!6/6bfee/2

    The first (join) method should typically be faster, but it's worth comparing against your data to confirm that.

    UPDATE

    Thanks to @AndriyM for improving my second version:

    select id
    , upc
    , name
    , TotalUPCValue
    from (
      select id, upc, name
      , row_number() over (partition by upc order by id) r
      , sum(value) over (partition by upc) as TotalUPCValue
      from demo
    ) firstRow
    where firstRow.r = 1
    ;

    SQL Fiddle: http://sqlfiddle.com/#!6/6bfee/7

    soup wrap:

    You're correct in thinking of partition by; though you'll also need to use a join (or an inline SQL in the results). Simplified example below:

    select firstRow.id
    , firstRow.upc
    , firstRow.name
    , sum(d.value) TotalUPCValue
    from (
      select id, upc, name
      , row_number() over (partition by upc order by id) r
      from demo
    ) firstRow
    inner join demo d on d.upc = firstRow.upc
    where firstRow.r = 1
    group by firstRow.id
    , firstRow.upc
    , firstRow.name
    

    Working copy with table definition on SQL Fiddle: http://sqlfiddle.com/#!6/6bfee/1

    Here's the alternate version which doesn't use a join:

    select id
    , upc
    , name
    , (select sum(d.value) from demo d where d.upc = firstRow.upc) TotalUPCValue
    from (
      select id, upc, name
      , row_number() over (partition by upc order by id) r
      from demo
    ) firstRow
    where firstRow.r = 1
    

    SQL Fiddle: http://sqlfiddle.com/#!6/6bfee/2

    The first (join) method should typically be faster, but it's worth comparing against your data to confirm that.

    UPDATE

    Thanks to @AndriyM for improving my second version:

    select id
    , upc
    , name
    , TotalUPCValue
    from (
      select id, upc, name
      , row_number() over (partition by upc order by id) r
      , sum(value) over (partition by upc) as TotalUPCValue
      from demo
    ) firstRow
    where firstRow.r = 1
    ;
    

    SQL Fiddle: http://sqlfiddle.com/#!6/6bfee/7
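    The final query's logic — the first row per upc plus the group's summed value — can be mimicked in plain Python as a sanity check; the demo rows below are illustrative, not the fiddle's data:

```python
from itertools import groupby

# (id, upc, name, value) rows; illustrative data only.
demo = [
    (1, "upc1", "widget", 10),
    (2, "upc1", "widget-renamed", 5),
    (3, "upc2", "gadget", 7),
]

# For each upc: keep the first row (lowest id) and attach the group
# sum, mirroring row_number()/sum() over (partition by upc).
result = []
for upc, grp in groupby(sorted(demo, key=lambda r: (r[1], r[0])),
                        key=lambda r: r[1]):
    grp = list(grp)
    first = grp[0]
    result.append((first[0], upc, first[2], sum(r[3] for r in grp)))

print(result)  # [(1, 'upc1', 'widget', 15), (3, 'upc2', 'gadget', 7)]
```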

    qid & accept id: (24726688, 24726929) query: Use SSIS To Copy A Table's Structure And Data With A Different Name soup:

    I guess you can make use of Execute Sql Task for this and simply execute the following statements inside your task.

    Instead of dropping and re-creating the table, simply truncate it: dropping the table means you would have to re-grant permissions if you have restrictions in place where only specific users can access the data.

    Without Dropping the table

    TRUNCATE TABLE Test_myTable;
    GO

    INSERT INTO Test_myTable (Col1, Col2, Col3, .....)
    SELECT Col1, Col2, Col3, .....
    FROM myTable
    GO

    Drop Table and Create

    If for some reason you have to drop the table and re-create it again, you could execute the following statements inside your Execute SQL Task.

    --Drop tables if exists

    IF OBJECT_ID('dbo.Test_myTable', 'U') IS NOT NULL
      DROP TABLE dbo.Test_myTable
    GO

    --Create and populate table
    SELECT Col1, Col2, Col3, .....
    INTO dbo.Test_myTable
    FROM myTable
    GO
    soup wrap:

    I guess you can make use of Execute Sql Task for this and simply execute the following statements inside your task.

    Instead of dropping and re-creating the table, simply truncate it: dropping the table means you would have to re-grant permissions if you have restrictions in place where only specific users can access the data.

    Without Dropping the table

    TRUNCATE TABLE Test_myTable;
    GO
    
    INSERT INTO Test_myTable (Col1, Col2, Col3, .....)
    SELECT Col1, Col2, Col3, .....
    FROM myTable
    GO
    

    Drop Table and Create

    If for some reason you have to drop the table and re-create it again, you could execute the following statements inside your Execute SQL Task.

    --Drop tables if exists
    
    IF OBJECT_ID('dbo.Test_myTable', 'U') IS NOT NULL
      DROP TABLE dbo.Test_myTable
    GO
    
    --Create and populate table
    SELECT Col1, Col2, Col3, .....
    INTO dbo.Test_myTable
    FROM myTable
    GO
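    As a rough illustration of the truncate-and-reload pattern, here it is against SQLite from Python (SQLite has no TRUNCATE, so DELETE stands in; the table names follow the example above):

```python
import sqlite3

# Truncate-and-reload between two same-shaped tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE myTable (Col1 INTEGER, Col2 TEXT);
    INSERT INTO myTable VALUES (1, 'a'), (2, 'b');
    CREATE TABLE Test_myTable (Col1 INTEGER, Col2 TEXT);
""")

conn.execute("DELETE FROM Test_myTable")  # SQLite's stand-in for TRUNCATE
conn.execute(
    "INSERT INTO Test_myTable (Col1, Col2) SELECT Col1, Col2 FROM myTable"
)

n = conn.execute("SELECT COUNT(*) FROM Test_myTable").fetchone()[0]
print(n)  # 2
```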
    
    qid & accept id: (24779447, 26926830) query: MySQL - Return group of last inserted ID's soup:

    Well, it turns out that MySQL is... painful to work with, however if anyone wants a solution here it is:

    You need to create a cursor, and set its value to last_insert_id(). For example:

        declare last_insert_pk int;
        declare last_insert2_pk int;

    Then, in the cursor, you set the last inserted pk(s) for that iteration:

        set last_insert_pk = last_insert_id();
        -- ...some stuff...
        set last_insert2_pk = last_insert_id();

    I had to use 8 different primary keys in a giant relation table, however it worked really well. There may be a better way, but this is understandable and repeatable.

    Good luck!

    soup wrap:

    Well, it turns out that MySQL is... painful to work with, however if anyone wants a solution here it is:

    You need to create a cursor, and set its value to last_insert_id(). For example:

        declare last_insert_pk int;
        declare last_insert2_pk int;
    

    Then, in the cursor, you set the last inserted pk(s) for that iteration:

        set last_insert_pk = last_insert_id();
        -- ...some stuff...
        set last_insert2_pk = last_insert_id();
    

    I had to use 8 different primary keys in a giant relation table, however it worked really well. There may be a better way, but this is understandable and repeatable.

    Good luck!
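    For comparison, SQLite's Python driver exposes the same capture-immediately discipline through cursor.lastrowid — grab each generated key right after its INSERT, before the next statement overwrites it (the table here is illustrative):

```python
import sqlite3

# Capture each generated key immediately after its INSERT, the same
# way last_insert_id() must be read before the next insert.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
cur = conn.cursor()

cur.execute("INSERT INTO t (v) VALUES ('first')")
last_insert_pk = cur.lastrowid

cur.execute("INSERT INTO t (v) VALUES ('second')")
last_insert2_pk = cur.lastrowid

print(last_insert_pk, last_insert2_pk)  # 1 2
```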

    qid & accept id: (24794466, 24794756) query: Database Schema Design: Tracking User Balance with concurrency soup:

    Relying on calculating an account balance every time you go to insert a new transaction is not a very good design - for one thing, as time goes by it will take longer and longer, as more and more rows appear in the transaction table.

    A better idea is to store the current balance in another table - either a new table, or in the existing users table that you are already using as a foreign key reference.

    It could look like this:

    CREATE TABLE users (
        user_id INT PRIMARY KEY,
        balance BIGINT NOT NULL DEFAULT 0 CHECK(balance>=0)
    );

    Then, whenever you add a transaction, you update the balance like this:

    UPDATE users SET balance=balance+$1 WHERE user_id=$2;

    You must do this inside a transaction, in which you also insert the transaction record.

    Concurrency issues are taken care of automatically: if you attempt to update the same record twice from two different transactions, then the second one will be blocked until the first one commits or rolls back. The default transaction isolation level of 'Read Committed' ensures this - see the manual section on concurrency.

    You can issue the whole sequence from your application, or if you prefer you can add a trigger to the user_transaction table such that whenever a record is inserted into the user_transaction table, the balance is updated automatically.

    That way, the CHECK clause ensures that no transactions can be entered into the database that would cause the balance to go below 0.

    soup wrap:

    Relying on calculating an account balance every time you go to insert a new transaction is not a very good design - for one thing, as time goes by it will take longer and longer, as more and more rows appear in the transaction table.

    A better idea is to store the current balance in another table - either a new table, or in the existing users table that you are already using as a foreign key reference.

    It could look like this:

    CREATE TABLE users (
        user_id INT PRIMARY KEY,
        balance BIGINT NOT NULL DEFAULT 0 CHECK(balance>=0)
    );
    

    Then, whenever you add a transaction, you update the balance like this:

    UPDATE users SET balance=balance+$1 WHERE user_id=$2;
    

    You must do this inside a transaction, in which you also insert the transaction record.
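    As a sketch (assuming a transaction table named user_transaction with user_id and amount columns — adjust the names to your schema), the sequence could look like:

    BEGIN;
    INSERT INTO user_transaction (user_id, amount) VALUES ($2, $1);
    UPDATE users SET balance = balance + $1 WHERE user_id = $2;
    COMMIT;

    If the UPDATE would violate the CHECK constraint, the error aborts the transaction, so neither the transaction row nor the balance change is kept.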

    Concurrency issues are taken care of automatically: if you attempt to update the same record twice from two different transactions, then the second one will be blocked until the first one commits or rolls back. The default transaction isolation level of 'Read Committed' ensures this - see the manual section on concurrency.

    You can issue the whole sequence from your application, or if you prefer you can add a trigger to the user_transaction table such that whenever a record is inserted into the user_transaction table, the balance is updated automatically.
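    A minimal trigger sketch, again assuming a user_transaction table with user_id and amount columns (hypothetical names):

    CREATE FUNCTION apply_transaction() RETURNS trigger AS $$
    BEGIN
        -- keep the running balance in step with every inserted transaction
        UPDATE users SET balance = balance + NEW.amount
        WHERE user_id = NEW.user_id;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER user_transaction_balance
    AFTER INSERT ON user_transaction
    FOR EACH ROW EXECUTE PROCEDURE apply_transaction();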

    That way, the CHECK clause ensures that no transactions can be entered into the database that would cause the balance to go below 0.

    qid & accept id: (24795288, 24800471) query: INSERT interpolated rows into existing table soup:


    Possible to do. Have a sub query that gets the max reported time for each order id / stock id and join that against the orders table where the stock id is the same and the latest time is less than the reported time. This gets you all the report times for that stock id that are greater than the latest time for that stock id and order id.

    Use MIN to get the lowest reported time. Convert the 2 times to seconds, add them together and divide by 2, then convert back from seconds to a time.
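    For instance, the midpoint calculation on its own (MySQL) works like this:

    SELECT SEC_TO_TIME((TIME_TO_SEC('10:00:00') + TIME_TO_SEC('11:00:00')) / 2);
    -- 10:30:00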

    Something like this:-

    SELECT orderid, stockid, 0, SEC_TO_TIME((TIME_TO_SEC(next_poss_order_report) + TIME_TO_SEC(last_order_report)) / 2)
    FROM
    (
        SELECT a.orderid, a.stockid, last_order_report, MIN(b.reported) next_poss_order_report
        FROM 
        (
            SELECT orderid, stockid, MAX(reported) last_order_report
            FROM orders_table
            GROUP BY orderid, stockid
        ) a
        INNER JOIN orders_table b
        ON a.stockid = b.stockid
        AND a.last_order_report < b.reported
        GROUP BY a.orderid, a.stockid, a.last_order_report
    ) sub0;
    

    SQL fiddle here:-

    http://www.sqlfiddle.com/#!2/cf129/17

    Possible to simplify this a bit to:-

    SELECT a.orderid, a.stockid, 0, SEC_TO_TIME((TIME_TO_SEC(MIN(b.reported)) + TIME_TO_SEC(last_order_report)) / 2)
    FROM 
    (
        SELECT orderid, stockid, MAX(reported) last_order_report
        FROM orders_table
        GROUP BY orderid, stockid
    ) a
    INNER JOIN orders_table b
    ON a.stockid = b.stockid
    AND a.last_order_report < b.reported
    GROUP BY a.orderid, a.stockid, a.last_order_report;
    

    These queries might take a while, but are probably more efficient than running many queries from scripted code.

    qid & accept id: (24833153, 24833766) query: How to store messages with multiple recipients in PostgreSQL? soup:


    You need:

    • a table for the users of your app, with the usual columns (unique id, name, etc.);

    • a table for messages, also with a unique id, and a column indicating which message it replies to; this will let you build threading;

    • a third table which constitutes the many-to-many relationship, with a foreign key on the user table and a foreign key on the message table.
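    A minimal sketch of those three tables (hypothetical names, chosen to match the queries below — adjust to your app):

    CREATE TABLE users (
        id    serial PRIMARY KEY,
        name  text NOT NULL
    );

    CREATE TABLE messages (
        id        serial PRIMARY KEY,
        parent_id integer REFERENCES messages(id),  -- NULL for thread roots
        body      text NOT NULL
    );

    CREATE TABLE threads_users (
        msg_id  integer NOT NULL REFERENCES messages(id),
        user_id integer NOT NULL REFERENCES users(id),
        PRIMARY KEY (msg_id, user_id)
    );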

    Getting all the recipients for a given message, or all the messages for a given recipient is just doing a couple of inner joins between all three tables and the proper where clause.

    For threading, you will need a recursive common table expression, which lets you follow the links between rows in the message table.

    Something like:

    WITH RECURSIVE threads AS (
        SELECT id, parent_id, id AS root_id, body
        FROM messages
        WHERE parent_id IS NULL
        UNION ALL
        SELECT msg.id AS id , msg.parent_id AS parent_id, msgp.root_id AS root_id, msg.body AS body
        FROM messages AS msg
        INNER JOIN threads AS msgp
        ON (msg.parent_id = msgp.id)
    )
    SELECT *
    FROM threads
    WHERE root_id = :root;
    

    The root_id column contains the id of the row at the origin of the current row's thread; it lets you select a single thread whose root_id is set by the parameter :root.

    With multiple recipients, you need to do the inner joins on threads:

    WITH ...
    )
    SELECT *
    FROM threads
    INNER JOIN threads_users tu
    ON threads.id = tu.msg_id
    INNER JOIN users
    ON users.id = tu.user_id
    WHERE root_id=:root
    
    qid & accept id: (24848880, 24849369) query: oracle month to day soup:


    As one of the approaches, you can turn a month into the list of days (dates) that constitute it (which eases the filtering), and perform the calculation as follows:

    /* sample of data that you've provided */
    with t1(mnth,val) as(
      select 1, 93  from dual union all
      select 2, 56  from dual union all
      select 3, 186 from dual union all
      select 4, 60  from dual
    ), 
    /*
        Generates current year dates 
        From January 1st 2014 to December 31st 2014  
     */
    dates(dt) as(
      select trunc(sysdate, 'YEAR') - 1 + level
        from dual
      connect by extract(year from (trunc(sysdate, 'YEAR') - 1 + level)) <= 
                 extract(year from sysdate)
    )
    /* 
       The query that performs calculations based on range of dates 
     */
    select sum(val / extract(day from last_day(dt))) as result
      from dates d
      join t1
        on (extract(month from d.dt) = t1.mnth)
     where dt between date '2014-01-17' and        -- January 17th 2014 to    
                      date '2014-03-31'            -- March 31st 2014
    

    Result:

        RESULT
    ----------
           287 
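
    You can verify the result by hand: each month's value is prorated per day, and only the days inside the range count:

    Jan (31 days, val  93):  93/31 = 3/day × 15 days (17th–31st) =  45
    Feb (28 days, val  56):  56/28 = 2/day × 28 days             =  56
    Mar (31 days, val 186): 186/31 = 6/day × 31 days             = 186
                                                           Total = 287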
    
    qid & accept id: (24855053, 24856282) query: Concatenating row values using Inner Join soup:


    You can do what you want by pre-aggregating the table before the join. If there are only two values and you don't care about the order, then this will work:

    DECLARE @DocHoldReasons VARCHAR(8000);
    SET @DocHoldReasons = 'DocType Hold';
    
    UPDATE dbo.EpnPackages 
        SET Error = 1,
            Msg = (COALESCE(@DocHoldReasons + ': ', '') + minv +
                   (case when minv <> maxv then ': ' + maxv else '' end)
                  )
        FROM EpnPackages p INNER JOIN
             (select cv.CountyId, min(cv.value) as minv, max(cv.value) as maxv
              from EpnCountyValues cv
              where cv.ValueName = 'DocHoldReason'
              group by cv.CountyId
             ) cv
            ON cv.CountyId = p.CountyId
        WHERE p.Status = 1000 AND p.Error = 0;
    

    EDIT:

    For more than two values, you have to do string concatenation. That is "unpleasant" in SQL Server. Here is the approach:

    DECLARE @DocHoldReasons VARCHAR(8000);
    SET @DocHoldReasons = 'DocType Hold';
    
    UPDATE dbo.EpnPackages 
        SET Error = 1,
            Msg = (COALESCE(@DocHoldReasons + ': ', '') + 
                   stuff((select ': ' + cv.value
                          from EpnCountyValues cv
                          where cv.ValueName = 'DocHoldReason' and
                                cv.CountyId = p.CountyId
                          for xml path ('')
                         ), 1, 2, '')
                   )
        FROM dbo.EpnPackages p
        WHERE p.Status = 1000 AND p.Error = 0;
    

    This version does it using a correlated subquery rather than a join with an aggregation.
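    As an aside: if you are on SQL Server 2017 or later, STRING_AGG replaces the FOR XML PATH trick — an untested sketch along the same lines:

    UPDATE p
        SET Error = 1,
            Msg = COALESCE(@DocHoldReasons + ': ', '') +
                  (SELECT STRING_AGG(cv.value, ': ')
                   FROM EpnCountyValues cv
                   WHERE cv.ValueName = 'DocHoldReason'
                     AND cv.CountyId = p.CountyId)
        FROM dbo.EpnPackages p
        WHERE p.Status = 1000 AND p.Error = 0;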

    EDIT II:

    You can fix this with an additional coalesce:

    DECLARE @DocHoldReasons VARCHAR(8000);
    SET @DocHoldReasons = 'DocType Hold';
    
    UPDATE dbo.EpnPackages 
        SET Error = 1,
            Msg = (COALESCE(@DocHoldReasons + ': ', '') + 
                   COALESCE(stuff((select ': ' + cv.value
                                   from EpnCountyValues cv
                                   where cv.ValueName = 'DocHoldReason' and
                                         cv.CountyId = p.CountyId
                                   for xml path ('')
                                  ), 1, 2, ''), '')
                   )
        FROM dbo.EpnPackages p
        WHERE p.Status = 1000 AND p.Error = 0;
    
    qid & accept id: (24910861, 24914798) query: Restrict foreign key relationship to rows of related subtypes soup:


    Simplify building on MATCH SIMPLE behavior of fk constraints

    If at least one column of a multicolumn foreign key constraint with the default MATCH SIMPLE behaviour is NULL, the constraint is not enforced. You can build on that to largely simplify your design.

    CREATE SCHEMA test;
    
    CREATE TABLE test.status(
       status_id  integer PRIMARY KEY
      ,sub        bool NOT NULL DEFAULT FALSE  -- TRUE .. *can* be sub-status
      ,UNIQUE (sub, status_id)
    );
    
    CREATE TABLE test.entity(
       entity_id  integer PRIMARY KEY
      ,status_id  integer REFERENCES test.status  -- can reference all statuses
      ,sub        bool      -- see examples below
      ,additional_col1 text -- should be NULL for main entities
      ,additional_col2 text -- should be NULL for main entities
      ,FOREIGN KEY (sub, status_id) REFERENCES test.status(sub, status_id)
         MATCH SIMPLE ON UPDATE CASCADE  -- optionally enforce sub-status
    );
    

    It is very cheap to store some additional NULL columns (for main entities).

    BTW, per documentation:

    If the refcolumn list is omitted, the primary key of the reftable is used.

    Demo-data:

    INSERT INTO test.status VALUES
      (1, TRUE)
    , (2, TRUE)
    , (3, FALSE);     -- not valid for sub-entities
    
    INSERT INTO test.entity(entity_id, status_id, sub) VALUES
      (11, 1, TRUE)   -- sub-entity (can be main, UPDATES to status.sub cascaded)
    , (13, 3, FALSE)  -- entity  (cannot be sub,  UPDATES to status.sub cascaded)
    , (14, 2, NULL)   -- entity  (can    be sub,  UPDATES to status.sub NOT cascaded)
    , (15, 3, NULL)   -- entity  (cannot be sub,  UPDATES to status.sub NOT cascaded)
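    With that data in place, an insert that claims sub-status for a main-only status is rejected by the composite FK — a hypothetical failing example:

    INSERT INTO test.entity(entity_id, status_id, sub) VALUES (16, 3, TRUE);
    -- fails: there is no row (TRUE, 3) in test.status,
    -- and both FK columns are non-NULL, so the constraint is enforced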
    

    SQL Fiddle (including your tests).

    Alternative with single FK

    Another option would be to enter all combinations of (status_id, sub) into the status table (there can only be 2 per status_id) and only have a single fk constraint:

    CREATE TABLE test.status(
       status_id  integer
      ,sub        bool DEFAULT FALSE
      ,PRIMARY KEY (status_id, sub)
    );
    
    CREATE TABLE test.entity(
       entity_id  integer PRIMARY KEY
      ,status_id  integer NOT NULL  -- cannot be NULL in this case
      ,sub        bool NOT NULL     -- cannot be NULL in this case
      ,additional_col1 text
      ,additional_col2 text
      ,FOREIGN KEY (status_id, sub) REFERENCES test.status
         MATCH SIMPLE ON UPDATE CASCADE  -- optionally enforce sub-status
    );
    
    INSERT INTO test.status VALUES
      (1, TRUE)       -- can be sub ...
    , (1, FALSE)      -- ... and main
    , (2, TRUE)
    , (2, FALSE)
    , (3, FALSE);     -- only main
    

    Etc.

    Related answers:

    Keep all tables

    If you need all four tables for some reason not in the question consider this detailed solution to a very similar question on dba.SE:

    Inheritance

    ... might be another option for what you describe. If you can live with some major limitations. Related answer:

    qid & accept id: (24920949, 24921180) query: Remove text of a field after last repeating character soup:


    Test Data

    DECLARE @TABLE TABLE (partnum VARCHAR(100))
    INSERT INTO @TABLE VALUES       
    ('H24897-D-001'),
    ('BHF44-82-V-1325'),
    ('BKNG5222'),
    ('YAKJD-78AB')
    

    Query

    SELECT   PartNum
            ,REVERSE(
                    SUBSTRING(REVERSE(Partnum), 
                  CHARINDEX('-',REVERSE(Partnum)) 
                   , LEN(Partnum) - CHARINDEX('-',REVERSE(Partnum)) + 1)
                   ) AS Result
    FROM @TABLE
    

    OUTPUT

    ╔═════════════════╦═════════════╗
    ║     PartNum     ║   Result    ║
    ╠═════════════════╬═════════════╣
    ║ H24897-D-001    ║ H24897-D-   ║
    ║ BHF44-82-V-1325 ║ BHF44-82-V- ║
    ║ BKNG5222        ║ BKNG5222    ║
    ║ YAKJD-78AB      ║ YAKJD-      ║
    ╚═════════════════╩═════════════╝
    
    qid & accept id: (24921796, 24921834) query: SUM subquery for total amount for each line soup:
    SELECT
      O.FileNumber,
      O.CloseDate,
      SUM(CL.Amount) as Total
    FROM dbo.Orders O
        LEFT JOIN dbo.Checks C
            ON O.OrdersID = C.OrdersID
        LEFT JOIN dbo.CheckLine CL
            ON C.ChecksID = CL.ChecksID
     GROUP BY O.FileNumber, O.CloseDate
    

    When you calculate Total in a subquery, that value will be treated as a constant by SQL Server and repeated on every row.

    It is very common to confuse GROUP BY with DISTINCT (please look at here and here) since they return the same values if no aggregation function is in the SELECT clause. In your example:

    SELECT DISTINCT FileNumber FROM ORDERS 
    

    will return the same as

    SELECT FileNumber FROM ORDERS GROUP BY FileNumber
    

    Use GROUP BY if you are wanting to aggregate information (like your field TOTAL).

    qid & accept id: (24939702, 24940006) query: Increase Date datatype by Number soup:


    You can try using the dateadd function here. This function takes a specific value and adds it to a specified date. You can add days, years, minutes, hours, and so on. In your case, you want to add minutes, and since you are adding to the already existing scheddate, you will use that as a parameter.

    Here's what the syntax may look like:

    UPDATE scpomgr.schedrcpts sr
    SET sr.scheddate = dateadd(
                            minute, 
                            (SELECT n.transleadtime FROM scpomgr.network n WHERE n.source = sr.loc),
                            (SELECT sr.scheddate)
                       );
    

    This will add minutes (specified by the first parameter) to sr.scheddate (specified by the third parameter). The minutes that will be added are n.transleadtime (specified by the second parameter).

    Right now, this assumes that the sr.scheddate and n.transleadtime subqueries each return only one value. If they return more, you may have to adjust your WHERE clause or limit the result set.

    I also took out the NVL function, but if you want to protect against null values I would put them in the second and/or third parameters. Definitely in the second, but if your scheddate column doesn't accept null values, then you won't need it.

    UPDATE scpomgr.schedrcpts sr
    SET sr.scheddate = dateadd(
                            minute, 
                            NVL((SELECT n.transleadtime FROM scpomgr.network n WHERE n.source = sr.loc), 0),
                            (SELECT sr.scheddate)
                       );
    

    I can't test this at the moment, so it may take some tweaking, but start there and let me know how we can improve it.

    EDIT

    If you're looking for the highest transleadtime, I do think the MAX function would be the simplest way. Try adjusting the subquery in the second parameter to:

    SELECT MAX(n.transleadtime) FROM scpomgr.network n WHERE n.source = sr.loc
    
    qid & accept id: (24970105, 24970139) query: How do I find the shortest bus route when there is more than 1 switch? soup:


    I posted such a thing a little while ago, here: Graph problems: connect by NOCYCLE prior replacement in SQL server?

    You'll find further tips here, where I cross-posted the question:
    http://social.msdn.microsoft.com/Forums/sqlserver/en-US/32069da7-4820-490a-a8b7-09900ea1de69/is-there-a-nocycle-prior-replacement-in-sql-server?forum=transactsql

    Graph

    CREATE TABLE [dbo].[T_Hops](
        [UID] [uniqueidentifier] NULL,
        [From] [nvarchar](1000) NULL,
        [To] [nvarchar](1000) NULL,
        [Distance] [decimal](18, 5) NULL
    ) ON [PRIMARY]
    
    GO
    
    
    
    
          INSERT INTO [dbo].[T_Hops]             ([UID]             ,[From]             ,[To]             ,[Distance])       VALUES             (newid()              ,'A'              ,'E'              ,10.00000              );   
          INSERT INTO [dbo].[T_Hops]             ([UID]             ,[From]             ,[To]             ,[Distance])       VALUES             (newid()              ,'E'              ,'D'              ,20.00000              );   
          INSERT INTO [dbo].[T_Hops]             ([UID]             ,[From]             ,[To]             ,[Distance])       VALUES             (newid()              ,'A'              ,'B'              ,5.00000              );   
          INSERT INTO [dbo].[T_Hops]             ([UID]             ,[From]             ,[To]             ,[Distance])       VALUES             (newid()              ,'B'              ,'C'              ,10.00000              );   
          INSERT INTO [dbo].[T_Hops]             ([UID]             ,[From]             ,[To]             ,[Distance])       VALUES             (newid()              ,'C'              ,'D'              ,5.00000              );   
          INSERT INTO [dbo].[T_Hops]             ([UID]             ,[From]             ,[To]             ,[Distance])       VALUES             (newid()              ,'A'              ,'F'              ,2.00000              );   
          INSERT INTO [dbo].[T_Hops]             ([UID]             ,[From]             ,[To]             ,[Distance])       VALUES             (newid()              ,'F'              ,'G'              ,6.00000              );   
          INSERT INTO [dbo].[T_Hops]             ([UID]             ,[From]             ,[To]             ,[Distance])       VALUES             (newid()              ,'G'              ,'H'              ,3.00000              );   
          INSERT INTO [dbo].[T_Hops]             ([UID]             ,[From]             ,[To]             ,[Distance])       VALUES             (newid()              ,'H'              ,'D'              ,1.00000              );   
    

    Now I can query the best connection from point x to point y like this:

    WITH AllRoutes 
    (
         [UID]
        ,[FROM]
        ,[To]
        ,[Distance]
        ,[Path]
        ,[Hops]
    )
    AS
    (
        SELECT 
             [UID]
            ,[FROM]
            ,[To]
            ,[Distance]
            ,CAST(([dbo].[T_Hops].[FROM] + [dbo].[T_Hops].[To]) AS varchar(MAX)) AS [Path]
            ,1 AS [Hops]
          FROM [dbo].[T_Hops]
          WHERE [FROM] = 'A'
    
        UNION ALL
    
    
        SELECT 
             [dbo].[T_Hops].[UID]
            --,[dbo].[T_Hops].[FROM]
            ,Parent.[FROM]
            ,[dbo].[T_Hops].[To]
            ,CAST((Parent.[Distance] + [dbo].[T_Hops].[Distance]) AS [decimal](18, 5)) AS distance
            ,CAST((Parent.[Path] + '/' + [dbo].[T_Hops].[FROM] + [dbo].[T_Hops].[To]) AS varchar(MAX)) AS [Path]
            ,(Parent.[Hops] + 1) AS [Hops]
         FROM [dbo].[T_Hops]
    INNER JOIN AllRoutes AS Parent 
                ON Parent.[To] = [dbo].[T_Hops].[FROM] 
    
    )
    
    SELECT TOP 100 PERCENT * FROM AllRoutes
    
    
    /*
    WHERE [FROM] = 'A' 
    AND [To] = 'D'
    AND CHARINDEX('F', [Path]) != 0 -- via F
    ORDER BY Hops, Distance ASC
    */
    
    GO
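    SQLite (3.8.3+) supports the same recursive-CTE pattern via WITH RECURSIVE, so the query above can be sketched end to end with Python's sqlite3 module. This is a hedged, simplified port: the [From]/[To] columns are renamed src/dst (FROM is a keyword), the UID column is dropped, and — like the original — there is no cycle guard, so it assumes an acyclic graph:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE T_Hops (src TEXT, dst TEXT, distance REAL)")
con.executemany("INSERT INTO T_Hops VALUES (?, ?, ?)", [
    ("A", "E", 10), ("E", "D", 20), ("A", "B", 5), ("B", "C", 10),
    ("C", "D", 5), ("A", "F", 2), ("F", "G", 6), ("G", "H", 3), ("H", "D", 1),
])
# Walk every route starting at A, accumulating a path string and total distance.
rows = con.execute("""
    WITH RECURSIVE AllRoutes(src, dst, distance, path, hops) AS (
        SELECT src, dst, distance, src || dst, 1 FROM T_Hops WHERE src = 'A'
        UNION ALL
        SELECT p.src, h.dst, p.distance + h.distance,
               p.path || '/' || h.src || h.dst, p.hops + 1
        FROM T_Hops AS h JOIN AllRoutes AS p ON p.dst = h.src
    )
    SELECT path, distance FROM AllRoutes WHERE dst = 'D' ORDER BY distance
""").fetchall()
best_path, best_distance = rows[0]
```

    With the sample data there are three A-to-D routes, and the cheapest is AF/FG/GH/HD at distance 12.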
    
    qid & accept id: (25032106, 25032320) query: selection based on certain condition soup:
soup wrap:
    SELECT col1,
           col2,
           col3
    FROM (SELECT col1,
                 col2,
                 col3,
                 sum(col2) OVER (PARTITION BY col1) sum_col2
          FROM tab1)
    WHERE (  (   sum_col2 <> 0
             AND col2 <> 0)
          OR sum_col2 = 0)
    

    If col2 can be negative and the requirement is that the sum of col2 be non-zero, the above is fine. However, if the requirement is that at least one col2 value be non-zero, it should be changed to:

    SELECT col1,
           col2,
           col3
    FROM (SELECT col1,
                 col2,
                 col3,
                 sum(abs(col2)) OVER (PARTITION BY col1) sum_col2
          FROM tab1)
    WHERE (  (   sum_col2 <> 0
             AND col2 <> 0)
          OR sum_col2 = 0)
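    The windowed-sum filter can be exercised against SQLite (window functions need SQLite >= 3.25, which ships with recent Python builds). The table and column names follow the answer; the sample rows and the ORDER BY for determinism are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tab1 (col1 TEXT, col2 INTEGER, col3 TEXT)")
con.executemany("INSERT INTO tab1 VALUES (?, ?, ?)", [
    ("a", 0, "x"), ("a", 5, "y"),   # group a: sum 5 -> keep only non-zero rows
    ("b", 0, "x"), ("b", 0, "y"),   # group b: sum 0 -> keep all rows
])
rows = con.execute("""
    SELECT col1, col2, col3
    FROM (SELECT col1, col2, col3,
                 SUM(col2) OVER (PARTITION BY col1) AS sum_col2
          FROM tab1)
    WHERE (sum_col2 <> 0 AND col2 <> 0) OR sum_col2 = 0
    ORDER BY col1, col3
""").fetchall()
```

    Group a drops its zero row while group b (all zeros) is kept whole.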
    
    qid & accept id: (25036420, 25036494) query: Shift manipulation in SQL to get counts soup:

soup wrap:

    I think you can get what you want using conditional aggregation:

    SELECT EID,
           sum(case when shift = 'd' then 1 else 0 end) as dayshifts,
           sum(case when shift = 'n' then 1 else 0 end) as nightshifts,
           count(*) as total
    FROM Attendance a
    WHERE (in_time BETWEEN CONVERT(DATETIME, '2014-01-07 00:00:00', 102) AND
                           CONVERT(DATETIME, '2014-07-31 00:00:00', 102)) AND
          PID = 'A002'
    GROUP BY EID;
    

    EDIT:

    If you want counts of distinct dates for the total, then use count(distinct):

    SELECT EID,
           sum(case when shift = 'd' then 1 else 0 end) as dayshifts,
           sum(case when shift = 'n' then 1 else 0 end) as nightshifts,
           count(distinct case when shift in ('d', 'n') then cast(in_time as date) end) as total
    FROM Attendance a
    WHERE (in_time BETWEEN CONVERT(DATETIME, '2014-01-07 00:00:00', 102) AND
                           CONVERT(DATETIME, '2014-07-31 00:00:00', 102)) AND
          PID = 'A002'
    GROUP BY EID;
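    The conditional-aggregation idea ports directly to SQLite. A minimal sketch, with table and column names following the answer but the sample rows invented (SQLite's date() stands in for CAST(... AS DATE)):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Attendance (EID TEXT, PID TEXT, shift TEXT, in_time TEXT)")
con.executemany("INSERT INTO Attendance VALUES (?, ?, ?, ?)", [
    ("E1", "A002", "d", "2014-02-01 08:00:00"),
    ("E1", "A002", "d", "2014-02-02 08:00:00"),
    ("E1", "A002", "n", "2014-02-02 20:00:00"),  # same date as the second row
])
row = con.execute("""
    SELECT EID,
           SUM(CASE WHEN shift = 'd' THEN 1 ELSE 0 END) AS dayshifts,
           SUM(CASE WHEN shift = 'n' THEN 1 ELSE 0 END) AS nightshifts,
           COUNT(DISTINCT CASE WHEN shift IN ('d','n') THEN date(in_time) END) AS total
    FROM Attendance
    WHERE PID = 'A002'
    GROUP BY EID
""").fetchone()
```

    Two day shifts and one night shift span only two distinct dates, so total is 2.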
    
    qid & accept id: (25046224, 25116669) query: How to update a varray type within a table with a simple update statement? soup:

soup wrap:

    I don't believe you can update a single object's value within a varray from plain SQL, as there is no way to reference the varray index. (The link Alessandro Rossi posted seems to support this, though not necessarily for that reason). I'd be interested to be proven wrong though, of course.

    I know you aren't keen on a PL/SQL approach but if you do have to then you could do this to just update that value:

    declare
      l_object_list my_object_varray;
      cursor c is
        select l.id, l.object_list, t.*
        from my_object_table l,
        table(l.object_list) t
        where t.value1 = 10
        for update of l.object_list;
    begin
      for r in c loop
        l_object_list := r.object_list;
        for i in 1..l_object_list.count loop
          if l_object_list(i).value1 = 10 then
            l_object_list(i).value2 := 'obj 4 upd';
          end if;
        end loop;
    
        update my_object_table
        set object_list = l_object_list
        where current of c;
      end loop;
    end;
    /
    
    anonymous block completed
    
    select l.id, t.* from my_object_table l, table(l.object_list) t;
    
            ID     VALUE1 VALUE2         VALUE3
    ---------- ---------- ---------- ----------
             1          1 object 1           10 
             1          2 object 2           20 
             1          3 object 3           30 
             2         10 obj 4 upd          10 
             2         20 object 5           20 
             2         30 object 6           30 
    

    SQL Fiddle.

    If you're updating other things as well then you might prefer a function that returns the object list with the relevant value updated:

    create or replace function get_updated_varray(p_object_list my_object_varray,
      p_value1 number, p_new_value2 varchar2)
    return my_object_varray as
      l_object_list my_object_varray;
    begin
      l_object_list := p_object_list;
      for i in 1..l_object_list.count loop
        if l_object_list(i).value1 = p_value1 then
          l_object_list(i).value2 := p_new_value2;
        end if;
      end loop;
    
      return l_object_list;
    end;
    /
    

    Then call that as part of an update; but you still can't update your in-line view directly:

    update (
      select l.id, l.object_list
      from my_object_table l, table(l.object_list) t
      where t.value1 = 10
    )
    set object_list = get_updated_varray(object_list, 10, 'obj 4 upd');
    
    SQL Error: ORA-01779: cannot modify a column which maps to a non key-preserved table
    

    You need to update based on the relevant ID(s):

    update my_object_table
    set object_list = get_updated_varray(object_list, 10, 'obj 4 upd')
    where id in (
      select l.id
      from my_object_table l, table(l.object_list) t
      where t.value1 = 10
    );
    
    1 rows updated.
    
    select l.id, t.* from my_object_table l, table(l.object_list) t;
    
            ID     VALUE1 VALUE2         VALUE3
    ---------- ---------- ---------- ----------
             1          1 object 1           10 
             1          2 object 2           20 
             1          3 object 3           30 
             2         10 obj 4 upd          10 
             2         20 object 5           20 
             2         30 object 6           30 
    

    SQL Fiddle.

    If you wanted to hide the complexity even further you could create a view with an instead-of trigger that calls the function:

    create view my_object_view as
      select l.id, t.* from my_object_table l, table(l.object_list) t
    /
    
    create or replace trigger my_object_view_trigger
    instead of update on my_object_view
    begin
      update my_object_table
      set object_list = get_updated_varray(object_list, :old.value1, :new.value2)
      where id = :old.id;
    end;
    /
    

    Then the update is pretty much what you wanted, superficially at least:

    update my_object_view
    set value2 = 'obj 4 upd'
    where value1 = 10;
    
    1 rows updated.
    
    select * from my_object_view;
    
            ID     VALUE1 VALUE2         VALUE3
    ---------- ---------- ---------- ----------
             1          1 object 1           10 
             1          2 object 2           20 
             1          3 object 3           30 
             2         10 obj 4 upd          10 
             2         20 object 5           20 
             2         30 object 6           30 
    

    SQL Fiddle.
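    The PL/SQL loop above boils down to a read-modify-write: fetch the whole collection, change the matching element, and write the collection back, because the varray can't be addressed element-by-element in plain SQL. A hedged, non-Oracle sketch of that same pattern, storing the collection as a JSON array in SQLite (an invented stand-in, not the answer's schema):

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE my_object_table (id INTEGER, object_list TEXT)")
con.execute("INSERT INTO my_object_table VALUES (2, ?)",
            (json.dumps([{"value1": 10, "value2": "object 4"},
                         {"value1": 20, "value2": "object 5"}]),))

for row_id, blob in con.execute("SELECT id, object_list FROM my_object_table").fetchall():
    objs = json.loads(blob)                       # fetch the whole collection
    for o in objs:
        if o["value1"] == 10:                     # locate the element to change
            o["value2"] = "obj 4 upd"
    con.execute("UPDATE my_object_table SET object_list = ? WHERE id = ?",
                (json.dumps(objs), row_id))       # write the whole collection back

updated = json.loads(con.execute(
    "SELECT object_list FROM my_object_table WHERE id = 2").fetchone()[0])
```

    Only the matching element changes; the rest of the collection is rewritten unmodified, which is exactly what the cursor loop does.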

    qid & accept id: (25076117, 25076221) query: sqlite replace() function to perform a string replace soup:

soup wrap:

    Just add a comma to all occurrences of 0.:

                   replace(TheColumn, '0.', ',0.')
    

    then remove the duplicates:

           replace(replace(TheColumn, '0.', ',0.'), ',,', ',')
    

    and the comma at the beginning:

    substr(replace(replace(TheColumn, '0.', ',0.'), ',,', ','), 2)
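    The full nested expression can be checked in SQLite from Python; the sample value below is invented (three decimals run together), since the real column contents come from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
value = "0.10.250.3"   # "0.1", "0.25", "0.3" concatenated with no separator
(result,) = con.execute(
    "SELECT substr(replace(replace(?, '0.', ',0.'), ',,', ','), 2)",
    (value,)).fetchone()
```

    The inner replace inserts a comma before every "0.", the middle replace collapses any doubled commas, and substr(..., 2) drops the leading comma, yielding "0.1,0.25,0.3".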
    
    qid & accept id: (25095284, 25095313) query: Using LEFT JOIN to returns rows that don't have a match soup:
soup wrap:
    SELECT a.auction_id
      FROM auctions AS a
      LEFT JOIN winners AS w
        ON a.auction_id = w.auction_id
     WHERE a.owner_id = 1234567
       AND a.is_draft = 0
       AND a.creation_in_progress = 0
       AND w.winner_id IS NULL
    

    This belongs in the WHERE clause:

       AND w.winner_id IS NULL
    

    Criteria on the outer-joined table belong in the ON clause when you want to ALLOW nulls. In this case, where you're filtering on nulls, you put that criterion into the WHERE clause. Everything in the ON clause is designed to allow nulls.

    Here are some examples using data from a question I answered not long ago:

    Proper use of where x is null: http://sqlfiddle.com/#!2/8936b5/2/0

    Same thing but improperly placing that criteria into the ON clause: http://sqlfiddle.com/#!2/8936b5/3/0

    (notice the FUNCTIONAL difference, the result is not the same, because the queries are not functionally equivalent)
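    The functional difference is easy to reproduce in SQLite; the two auctions and single winner below are invented sample rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE auctions (auction_id INTEGER)")
con.execute("CREATE TABLE winners (auction_id INTEGER, winner_id INTEGER)")
con.executemany("INSERT INTO auctions VALUES (?)", [(1,), (2,)])
con.execute("INSERT INTO winners VALUES (1, 99)")   # auction 1 has a winner

# IS NULL in WHERE: keeps only auctions with no winner (an anti-join).
no_winner = con.execute("""
    SELECT a.auction_id FROM auctions a
    LEFT JOIN winners w ON a.auction_id = w.auction_id
    WHERE w.winner_id IS NULL
    ORDER BY a.auction_id
""").fetchall()

# The same predicate moved into ON: the join simply never matches,
# so every auction comes back (with NULLs on the winners side).
all_rows = con.execute("""
    SELECT a.auction_id FROM auctions a
    LEFT JOIN winners w ON a.auction_id = w.auction_id AND w.winner_id IS NULL
    ORDER BY a.auction_id
""").fetchall()
```

    The first query returns only auction 2; the second returns both auctions, confirming the two forms are not equivalent.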

    qid & accept id: (25140883, 25141261) query: Converting XML in SQL Server soup:

soup wrap:

    Try something like this.

    If you have a XML variable:

    declare @xml XML = '';
    
    select 
      data.node.value('@en-US', 'varchar(11)') my_column
    from @xml.nodes('locale') data(node);
    

    In your case, for a table's column (sorry for not giving this example first):

    create table dbo.example_xml
    (
        my_column XML not null
    );
    go
    
    insert into dbo.example_xml
    values('');
    go
    
    select
      my_column.value('(/locale/@en-US)[1]', 'varchar(11)') [en-US]
    from dbo.example_xml;
    go
    

    Hope it helps.
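    Outside SQL Server, the same attribute extraction can be sketched with Python's ElementTree; the sample XML document and its attribute value are invented for illustration, since the original snippet's XML literal was lost:

```python
import xml.etree.ElementTree as ET

# A hypothetical locale element with an en-US attribute, as the queries assume.
doc = '<locale en-US="Submit"/>'
value = ET.fromstring(doc).get("en-US")
```

    This mirrors what the .value('(/locale/@en-US)[1]', ...) XQuery path pulls out of the XML column.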

    qid & accept id: (25144691, 25144760) query: MySQL counting and sorting rows returned from a query soup:

soup wrap:

    Just add an aggregate function (e.g. COUNT() or SUM()) in the SELECT list, and add a GROUP BY clause to the query, and an ORDER BY clause to the query.

    SELECT U.username
         , COUNT(Q.question_id)
      FROM ...
    
     GROUP BY Q.author_id
     ORDER BY COUNT(Q.question_id) DESC
    

    Note that the predicate on the role column in the WHERE clause of your query negates the "outerness" of the LEFT JOIN operation. (With the LEFT JOIN, any rows from Q that don't find a matching row in U will return NULL for all of the columns in U. Adding a predicate U.role = '0' in the WHERE clause will cause any rows with a NULL value in U.role to be excluded.)


    This would return distinct values of username, along with a "count" of the questions related to that user:

    SELECT U.username
         , COUNT(Q.question_id)
      FROM p1209279x.questions Q
      JOIN p1209279x.users U
        ON U.user_id=Q.author_id
     WHERE Q.approved='Y'
       AND Q.role='0'
     GROUP BY Q.author_id
     ORDER BY COUNT(Q.question_id) DESC
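    The aggregate + GROUP BY + ORDER BY shape runs unchanged in SQLite; the users and questions below are invented sample data (schema prefixes dropped):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (user_id INTEGER, username TEXT)")
con.execute("CREATE TABLE questions (question_id INTEGER, author_id INTEGER,"
            " approved TEXT, role TEXT)")
con.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])
con.executemany("INSERT INTO questions VALUES (?, ?, ?, ?)",
                [(10, 1, "Y", "0"), (11, 1, "Y", "0"), (12, 2, "Y", "0")])
rows = con.execute("""
    SELECT u.username, COUNT(q.question_id)
    FROM questions q
    JOIN users u ON u.user_id = q.author_id
    WHERE q.approved = 'Y' AND q.role = '0'
    GROUP BY q.author_id
    ORDER BY COUNT(q.question_id) DESC
""").fetchall()
```

    alice, with two approved questions, sorts ahead of bob with one.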
    
    qid & accept id: (25207558, 25207609) query: Get first 100 records in a table soup:

soup wrap:

    Try this to get the 100 records:

      select 
    p.attr_value product,
    m.attr_value model,
    u.attr_value usage,
    t4.attr_value location -- alias l was never joined; assuming location comes from table4
        from table1 t1 join table2 t2 on t1.e_subid = t2.e_subid
                       join table4 t4 on t4.loc_id = t1.loc_id
                       join table3 p  on t2.e_cid = p.e_cid 
                       join table3 m  on t2.e_cid = m.e_cid 
                       join table3 u  on t2.e_cid = u.e_cid 
      Where
          t4.attr_name = 'SiteName' 
          and p.attr_name  = 'Product'
          and m.attr_name  = 'Model'
          and u.attr_name  = 'Usage'
          and ROWNUM <= 100
          order by product,location;
    

    Also note that Oracle assigns ROWNUM before the ORDER BY is applied, so this query picks 100 arbitrary matching rows and then sorts just those.

    However you may try to check if the value exists in the table using this:

    select case 
                when exists (select 1
            from table1 t1 join table2 t2 on t1.e_subid = t2.e_subid
                           join table4 t4 on t4.loc_id = t1.loc_id
                           join table3 p  on t2.e_cid = p.e_cid 
                           join table3 m  on t2.e_cid = m.e_cid 
                           join table3 u  on t2.e_cid = u.e_cid 
          Where
              t4.attr_name = 'SiteName' 
              and p.attr_name  = 'Product'
              and m.attr_name  = 'Model'
              and u.attr_name  = 'Usage') 
        then 'Y' 
                else 'N' 
            end as rec_exists
    from dual;
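    The ROWNUM gotcha is really about order of operations: the row limit is applied before the sort. A plain Python sketch of the difference, with invented data:

```python
rows = [5, 1, 4, 2, 3]

# ROWNUM style: take the first N rows as they arrive, THEN sort them --
# this is what "WHERE ROWNUM <= N ... ORDER BY" does in Oracle.
rownum_style = sorted(rows[:3])

# Top-N style: sort everything first, then take N -- usually what's wanted.
top_n_style = sorted(rows)[:3]
```

    The two slices differ ([1, 4, 5] vs [1, 2, 3]), which is why a top-N query in Oracle needs the ORDER BY in an inner query with ROWNUM applied outside.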
    
    qid & accept id: (25238315, 25238476) query: select distinct of a column and order by date column but without showing the date column soup:

soup wrap:

    Try this query:

    WITH Names AS (
       SELECT
          Name,
          Seq = Dense_Rank() OVER (ORDER BY SomeDate)
             - Dense_Rank() OVER (PARTITION BY Name ORDER BY SomeDate)
       FROM
          dbo.Names
    )
    SELECT Name
    FROM Names
    GROUP BY Name, Seq
    ORDER BY Min(Seq)
    ;
    

    Run this live in a SQL Fiddle

    This will return the A, B, A pattern you requested.

    You can't use a simple DISTINCT because you're asking to display a single value, but order by all the dates that the value may have associated with it. What if your data looks like this?

    Name  Date
    ----  ----
    A     2014-01-01
    B     2014-02-01
    B     2014-03-01
    A     2014-04-01
    

    How do you decide whether to put A first, or B first, based on some theoretical ordering by the date?

    That is why I had to do the above subtraction of windowing functions, which should order things how you want.

    Notes

    I call this technique a "simulated PREORDER BY". Dense_Rank does not offer any way to preorder the rows before ranking based on ordering. If you could do Dense_Rank() OVER (PREORDER BY Date ORDER BY Name) to indicate that you want to order by Date first, but don't want it to be part of the resulting rank calculation, you'd be set! However, that doesn't exist. After some study a while back I hit on the idea to use a combination of windowing functions to accomplish the purpose, and the above query represents that result.

    Note that you must also GROUP BY the Name, not just the resulting subtracted windowing expression, in order for everything to work correctly, because the expression, while it distinguishes runs within a single Name, can result in duplicate values across the entire set (two different Name values can have the same expression result). You can assign a new rank or other windowing function if you need a value that can be ordered by on its own.
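    The double DENSE_RANK trick also works in SQLite (window functions need SQLite >= 3.25). A sketch using the A, B, B, A sample rows from the table above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Names (Name TEXT, SomeDate TEXT)")
con.executemany("INSERT INTO Names VALUES (?, ?)", [
    ("A", "2014-01-01"), ("B", "2014-02-01"),
    ("B", "2014-03-01"), ("A", "2014-04-01"),
])
names = [r[0] for r in con.execute("""
    WITH ranked AS (
        SELECT Name,
               DENSE_RANK() OVER (ORDER BY SomeDate)
             - DENSE_RANK() OVER (PARTITION BY Name ORDER BY SomeDate) AS Seq
        FROM Names
    )
    SELECT Name FROM ranked GROUP BY Name, Seq ORDER BY MIN(Seq)
""")]
```

    The subtraction gives each contiguous run of a Name its own Seq value, so the grouped output preserves the A, B, A pattern instead of collapsing to a plain DISTINCT.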

    qid & accept id: (25259434, 25260812) query: Simple fetch ASP prepared statement soup:

soup wrap:

    this will not work in classic asp:

    Dim cmdPrep1 As New ADODB.Command
    

    you have to use server.createobject like so:

    dim cmdPrep1 : set cmdPrep1 = server.createobject("ADODB.Command")
    
    cmdPrep1.ActiveConnection = cn
    cmdPrep1.CommandType = adCmdText
    cmdPrep1.CommandText = "SELECT ID,NAME FROM MEMBERS WHERE ID =?"
    
    
    cmdPrep1.parameters.Append cmdPrep1.createParameter( "ID", adInteger, , , Request.Form("nameOfIDField") )
    
    dim rs : set rs = cmdPrep1.execute
    

    now you have an ADODB.Recordset in your variable rs.
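    The ADODB command above is just a parameterized query; the same idea in Python's sqlite3, with an invented MEMBERS row, uses ? placeholders the same way the ADO CommandText does:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MEMBERS (ID INTEGER, NAME TEXT)")
con.execute("INSERT INTO MEMBERS VALUES (7, 'Ann')")
# The user-supplied value is bound, never concatenated into the SQL text.
row = con.execute("SELECT ID, NAME FROM MEMBERS WHERE ID = ?", (7,)).fetchone()
```

    Binding the value keeps the statement text constant and avoids SQL injection, which is the point of preparing the command rather than building the string.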

    qid & accept id: (25275552, 25277245) query: MySQL UPDATE - SET field in column to 1, all other fields to 0, with one query soup:

soup wrap:

    I think you want this logic:

    UPDATE table
        SET frontpage = (case when poll_id = '555' then '1' else '0' end)
        WHERE user_id = '999';
    

    As a note: if the constants should really be integers, then drop the single quotes. In fact, you can then simplify the query to:

    UPDATE table
        SET frontpage = (poll_id = 555)
        WHERE user_id = 999;
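
    The simplification works because MySQL evaluates a comparison to 1 (true) or 0 (false), so the boolean result can be stored directly in an integer column. A quick check:

```sql
-- In MySQL a comparison yields 1 or 0:
SELECT (5 = 5) AS t, (5 = 6) AS f;   -- returns 1, 0
```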
    
    qid & accept id: (25292138, 25292502) query: How to convert SQL Server Query into Access soup:

    soup wrap:

    A direct translation into Access would be:

    select * from tblClient
    where company & dba1 & dba2 & dba3 like '*jbl*'
    

    EDIT: To make an exact match, you could do:

    select * from tblClient
    where '|' & company & '|' & dba1 & '|' & dba2 & '|' & dba3 & '|' like '*|' & 'jbl' & '|*'
    
    qid & accept id: (25319348, 25326206) query: Unpivot Multiple Columns in MySQL soup:

    soup wrap:

    For this suggestion I have created a simple 50-row table called TransPoser. There may already be a table of integers available in MySQL or in your db, but you want something similar that will give you numbers 1 to N for those numbered columns.
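
    The integer table itself can be built in a few lines; this is a hypothetical sketch (the original doesn't show the DDL), seeding 50 rows from a cross join of two small digit lists:

```sql
-- Hypothetical DDL/seed for the TransPoser integer table (values 1..50)
CREATE TABLE TransPoser (N INT NOT NULL PRIMARY KEY);

INSERT INTO TransPoser (N)
SELECT a.d + b.d * 10 + 1      -- 0..9 plus 0..4 tens -> 1..50
FROM (SELECT 0 d UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
      UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) a
CROSS JOIN (SELECT 0 d UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4) b;
```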

    Then, using that table, cross join to your non-normalized table (I call it BadTable) but restrict this to the first row. Then using a set of case expressions we pivot those date strings into a column. It would be possible to convert to a proper date as we do this if needed (I would suggest it, but haven't included it).

    This small transposition is then used as a derived table in the main query.

    The main query ignores that first row, but also uses a cross join to force all original rows into the 50 rows (or 4 as we see in this example). This Cartesian product is then joined back to the derived table discussed above to supply the dates. Then it is another set of case expressions that transpose the percentages into a column, aligned to the date and various codes.

    Example result (from sample data), blank lines added manually:

    | N |  CODE | DESC | CODE_0 | DESC_0 |   THEDATE | PERCENTAGE |
    |---|-------|------|--------|--------|-----------|------------|
    | 1 | CTR07 | Risk |     P1 | Phase1 | 29-Nov-13 |        0.2 |
    | 1 | CTR07 | Risk |     P1 | Phase1 | 29-Nov-13 |        0.2 |
    | 1 | CTR07 | Risk |     P1 | Phase1 | 29-Nov-13 |        0.2 |
    | 1 | CTR08 | Oper |     P1 | Phase1 | 29-Nov-13 |        0.6 |
    | 1 | CTR08 | Oper |     P1 | Phase1 | 29-Nov-13 |        0.6 |
    | 1 | CTR08 | Oper |     P1 | Phase1 | 29-Nov-13 |        0.6 |
    
    | 2 | CTR07 | Risk |     P1 | Phase1 |  6-Dec-13 |        0.4 |
    | 2 | CTR07 | Risk |     P1 | Phase1 |  6-Dec-13 |        0.4 |
    | 2 | CTR07 | Risk |     P1 | Phase1 |  6-Dec-13 |        0.4 |
    | 2 | CTR08 | Oper |     P1 | Phase1 |  6-Dec-13 |        0.6 |
    | 2 | CTR08 | Oper |     P1 | Phase1 |  6-Dec-13 |        0.6 |
    | 2 | CTR08 | Oper |     P1 | Phase1 |  6-Dec-13 |        0.6 |
    
    | 3 | CTR07 | Risk |     P1 | Phase1 | 13-Dec-13 |        0.6 |
    | 3 | CTR07 | Risk |     P1 | Phase1 | 13-Dec-13 |        0.6 |
    | 3 | CTR07 | Risk |     P1 | Phase1 | 13-Dec-13 |        0.6 |
    | 3 | CTR08 | Oper |     P1 | Phase1 | 13-Dec-13 |        0.9 |
    | 3 | CTR08 | Oper |     P1 | Phase1 | 13-Dec-13 |        0.9 |
    | 3 | CTR08 | Oper |     P1 | Phase1 | 13-Dec-13 |        0.9 |
    
    | 4 | CTR07 | Risk |     P1 | Phase1 | 20-Dec-13 |        1.1 |
    | 4 | CTR07 | Risk |     P1 | Phase1 | 20-Dec-13 |        1.1 |
    | 4 | CTR07 | Risk |     P1 | Phase1 | 20-Dec-13 |        1.1 |
    | 4 | CTR08 | Oper |     P1 | Phase1 | 20-Dec-13 |        2.7 |
    | 4 | CTR08 | Oper |     P1 | Phase1 | 20-Dec-13 |        2.7 |
    | 4 | CTR08 | Oper |     P1 | Phase1 | 20-Dec-13 |        2.7 |
    

    The query:

    select
           n.n
         , b.Code
         , b.Desc
         , b.Code_0
         , b.Desc_0
         , T.theDate
         , case
                when n.n =  1 then `1`
                when n.n =  2 then `2`
                when n.n =  3 then `3`
                when n.n =  4 then `4`
             /* when n.n =  5 then `5` */
             /* when n.n = 50 then `50`*/
           end as Percentage
    from BadTable as B
    cross join (select N from TransPoser where N < 5) as N
    inner join (
                /* transpose just the date row */
                /* join back via the number given to each row */
                select
                        n.n
                      , case
                            when n.n =  1 then `1`
                            when n.n =  2 then `2`
                            when n.n =  3 then `3`
                            when n.n =  4 then `4`
                         /* when n.n =  5 then `5` */
                         /* when n.n = 50 then `50`*/
                       end as theDate
                from BadTable as B
                cross join (select N from TransPoser where N < 5) as N
                where b.code is null
                and b.Period = 'Date'
               ) as T on N.N = T.N
    where b.code is NOT null
    and b.Period <> 'Date'
    order by
           n.n
         , b.code
    ;
    

    For the above, see this SQLFIDDLE

    It really isn't fair to expect a fully prepared executable deliverable as the result of a question IMHO - it is "stretching the friendship". But to morph the above query into a dynamic query isn't too hard; it's just a bit "tedious" as the syntax is a bit tricky. I'm not that experienced with MySQL, but this is how I would do it:

    set @numcols := 4;
    set @casevar := '';
    
    set @casevar := (
                      select 
                      group_concat(@casevar
                                           ,'when n.n =  '
                                           , n.n
                                           ,' then `'
                                           , n.n
                                           ,'`'
                                          SEPARATOR ' ')
                      from TransPoser as n
                      where n.n <= @numcols
                     )
    ;
    
    
    set @sqlvar := concat(
              'SELECT n.n , b.Code , b.Desc , b.Code_0 , b.Desc_0 , T.theDate , CASE '
            , @casevar
            , ' END AS Percentage FROM BadTable AS B CROSS JOIN (SELECT N FROM  TransPoser WHERE N <='
            , @numcols
            , ') AS N INNER JOIN ( SELECT n.n , CASE '
            , @casevar                                                                                                       
            , ' END AS theDate FROM BadTable AS B CROSS JOIN (SELECT N FROM  TransPoser WHERE N <='
            , @numcols
            , ') AS N WHERE b.code IS NULL '
            , ' AND b.Period = ''Date'' ) AS T ON N.N = T.N WHERE b.code IS NOT NULL AND b.Period <> ''Date'' ORDER BY n.n , b.code ' 
            );
    
    PREPARE stmt FROM @sqlvar;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
    

    Demo of the dynamic approach

    qid & accept id: (25321698, 25321779) query: How to split a mysql field into two and compare string between both splited fields soup:

    soup wrap:

    Use LEFT() and RIGHT() since the length of your values is fixed, and use STR_TO_DATE() to convert your string to a date. Here is the example:

    SELECT financial_year
    FROM financial_years
    WHERE STR_TO_DATE('03-05-2011','%d-%m-%Y') >= DATE( LEFT(financial_year,10) )
    AND STR_TO_DATE('03-05-2011','%d-%m-%Y') <= DATE( RIGHT(financial_year,10) );
    

    If the data type of financial_year is VARCHAR you should use STR_TO_DATE() too, like this:

    STR_TO_DATE(LEFT(financial_year,10),'%d-%m-%Y') 
    

    and

    STR_TO_DATE(RIGHT(financial_year,10),'%d-%m-%Y')
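
    Putting both together (assuming financial_year is stored as a string such as '01-04-2011 - 31-03-2012', so the first and last 10 characters are the two dates), the full query becomes:

```sql
-- Hypothetical combined form for a VARCHAR financial_year column
SELECT financial_year
FROM financial_years
WHERE STR_TO_DATE('03-05-2011','%d-%m-%Y') >= STR_TO_DATE(LEFT(financial_year,10),'%d-%m-%Y')
  AND STR_TO_DATE('03-05-2011','%d-%m-%Y') <= STR_TO_DATE(RIGHT(financial_year,10),'%d-%m-%Y');
```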
    
    qid & accept id: (25361410, 25361526) query: Drop auto generated constraint name soup:

    soup wrap:

    Your con_name variable is out of scope within the DDL statement you're executing; you're trying to drop a constraint called con_name, not one named with the value it holds - as you suspected. You can't use a bind variable here, so you'll need to concatenate the name:

    DECLARE
      con_name all_constraints.constraint_name%type;
    BEGIN
      select constraint_name into con_name
      from all_constraints
      where table_name = 'MY_TABLE' and constraint_type = 'P';
    
      EXECUTE immediate 'ALTER TABLE MY_TABLE drop constraint ' || con_name;
    
      EXECUTE immediate 'ALTER TABLE MY_TABLE ADD CONSTRAINT MT_PK PRIMARY KEY (REV, ID)';
    END;
    /
    

    As Nicholas Krasnov pointed out in a comment, you don't need to do this at all; you can drop the primary key without specifying its name, without using dynamic SQL or a PL/SQL block:

    ALTER TABLE MY_TABLE DROP PRIMARY KEY;
    ALTER TABLE MY_TABLE ADD CONSTRAINT MT_PK PRIMARY KEY (REV, ID);
    

    Hopefully you don't already have any tables with foreign key constraints against this PK.
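
    If you want to check for dependent foreign keys before dropping, a query along these lines (a sketch, not qualified by owner) lists the constraints that reference the table's primary key:

```sql
-- Hypothetical check: foreign keys that reference MY_TABLE's primary key
SELECT c.owner, c.table_name, c.constraint_name
FROM all_constraints c
JOIN all_constraints p
  ON c.r_constraint_name = p.constraint_name
 AND c.r_owner = p.owner
WHERE p.table_name = 'MY_TABLE'
  AND p.constraint_type = 'P'
  AND c.constraint_type = 'R';
```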

    qid & accept id: (25380801, 25380913) query: Moving or inserting data to other SQL table with format soup:

    soup wrap:

    The easiest way to do this is with union all:

    select col0, col1, col2, col5
    from oldtable
    union all
    select col0, col1, col3, col4
    from oldtable
    where col3 is not null;
    

    If you want to put this into a new table, use either insert or select into. For instance:

    select col0, col1, col3, col4
    into newtable
    from (select col0, col1, col2 as col3, col5 as col4
          from oldtable
          union all
          select col0, col1, col3, col4
          from oldtable
          where col3 is not null
         ) t
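
    The insert variant mentioned above would look like this (hypothetical column list, assuming newtable already exists with those four columns):

```sql
-- Alternative when newtable already exists: INSERT ... SELECT
INSERT INTO newtable (col0, col1, col3, col4)
SELECT col0, col1, col2, col5
FROM oldtable
UNION ALL
SELECT col0, col1, col3, col4
FROM oldtable
WHERE col3 IS NOT NULL;
```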
    
    qid & accept id: (25390857, 25391039) query: Replace partial value inside row soup:

    soup wrap:

    The easiest way to do this is to convert your existing URLs to something else, run the original query, and then revert them all back again.

    This query will replace all instances of url.com/images to [PLACEHOLDER].

    UPDATE wp_posts
    SET post_content = REPLACE(post_content,'url.com/images','[PLACEHOLDER]')
    WHERE post_content LIKE '%url.com/images%';
    

    Now run your original query to append /images to the url.com:

    UPDATE wp_posts
    SET post_content = REPLACE(post_content,'url.com','url.com/images')
    WHERE post_content LIKE '%url.com%';
    

    And now you're free to move the [PLACEHOLDER] back:

    UPDATE wp_posts
    SET post_content = REPLACE(post_content,'[PLACEHOLDER]','url.com/images')
    WHERE post_content LIKE '%[PLACEHOLDER]%';
    

    All in one lump, for copy & paste ease:

    UPDATE wp_posts
    SET post_content = REPLACE(post_content,'url.com/images','[PLACEHOLDER]')
    WHERE post_content LIKE '%url.com/images%';
    UPDATE wp_posts
    SET post_content = REPLACE(post_content,'url.com','url.com/images')
    WHERE post_content LIKE '%url.com%';
    UPDATE wp_posts
    SET post_content = REPLACE(post_content,'[PLACEHOLDER]','url.com/images')
    WHERE post_content LIKE '%[PLACEHOLDER]%';
    
    qid & accept id: (25420950, 25421574) query: How do I combine 2 records with a single field into 1 row with 2 fields (Oracle 11g)? soup:

    soup wrap:

    You need to use pivot:

    with t(id, d) as (
      select 1, 'field1 = test2' from dual union all
      select 2, 'field1 = test3' from dual 
    )
    select *
      from t
    pivot (max (d) for id in (1, 2))
    

    If you don't have the id field you can generate it, but you will have XML type:

    with t(d) as (
      select 'field1 = test2' from dual union all
      select 'field1 = test3' from dual 
    ), t1(id, d) as (
      select ROW_NUMBER() OVER(ORDER BY d), d from t
    )
    select *
      from t1
    pivot xml (max (d) for id in (select id from t1))
    
    qid & accept id: (25428684, 25428786) query: MySQL Select from three tables soup:

    soup wrap:

    something like this?

    QUERY:

    SELECT country, profession, MAX(money) AS money 
    FROM
    (   SELECT u.country, g.profession, SUM(um.money) AS money
        FROM user_money um
        JOIN users u ON u.id = um.user_id
        JOIN groups g ON g.id = um.group_id
        GROUP BY g.profession, u.country
        ORDER BY um.money DESC
    ) t
    GROUP BY country
    ORDER BY money DESC
    

    SEE DEMO

    OUTPUT:

    +---------------+------------+-------+
    | country       | profession | money |
    +---------------+------------+-------+
    | Luxembourg    | Hacker     |  200  |
    | Albania       | Hacker     |  120  |
    | United States | Boss       |  55   |
    +---------------+------------+-------+
    
    qid & accept id: (25472241, 25472731) query: mysql most popular articles in most popular categories soup:

    soup wrap:

    To do this in MySQL you have to mimic the row_number() over (partition by category) functionality that would otherwise be available in other databases.

    I've tested out the query below using some sample data here:

    Fiddle:

    http://sqlfiddle.com/#!9/2b8d9/1/0

    Query:

    select id, category_id
    from(
    select x.*,
           @row_number:=case when @category_id=x.category_id then @row_number+1 else 1 end as row_number,
           @category_id:=x.category_id as grp
      from (select art.id, art.category_id, count(*) as num_art_views
              from articles art
              join (select art.category_id, count(*)
                     from view_counts cnt
                     join articles art
                       on cnt.article_id = art.id
                    group by art.category_id
                    order by 2 desc limit 5) topcats
                on art.category_id = topcats.category_id
              join view_counts cnt
                on art.id = cnt.article_id
             group by art.id, art.category_id
             order by art.category_id, num_art_views desc) x
     cross join (select @row_number := 0, @category_id := '') as r
    ) x where row_number <= 5
    

    For some clarification, this will show the top 5 articles within the top 5 categories.

    Using LIMIT was sufficient to get the top 5 categories, but to get the top 5 articles WITHIN each category, you have to mimic the PARTITION BY of other databases by using a variable that restarts at each change in category.
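
    Stripped of the surrounding joins, the variable trick reduces to this minimal sketch (table and column names hypothetical): the rows are pre-sorted by category, and the row counter resets to 1 whenever the category changes.

```sql
-- Minimal pattern: per-group row numbers via user variables (pre-8.0 MySQL)
SELECT t.*,
       @rn  := IF(@grp = t.category_id, @rn + 1, 1) AS row_number,
       @grp := t.category_id                        AS grp
FROM (SELECT * FROM articles ORDER BY category_id, views DESC) t
CROSS JOIN (SELECT @rn := 0, @grp := '') init;
```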

    It might help to understand if you run just the inner portion; see fiddle here: http://sqlfiddle.com/#!9/2b8d9/2/0

    The output at that point is:

    |        ID | CATEGORY_ID | NUM_ART_VIEWS | ROW_NUMBER |    GRP |
    |-----------|-------------|---------------|------------|--------|
    | article16 |       autos |             2 |          1 |  autos |
    | article14 |      planes |             2 |          1 | planes |
    | article12 |       sport |             4 |          1 |  sport |
    |  article3 |       sport |             3 |          2 |  sport |
    |  article4 |       sport |             3 |          3 |  sport |
    |  article1 |       sport |             3 |          4 |  sport |
    |  article2 |       sport |             3 |          5 |  sport |
    |  article5 |       sport |             2 |          6 |  sport |
    | article15 |      trains |             2 |          1 | trains |
    | article13 |          tv |             6 |          1 |     tv |
    |  article9 |          tv |             3 |          2 |     tv |
    |  article6 |          tv |             3 |          3 |     tv |
    |  article7 |          tv |             3 |          4 |     tv |
    |  article8 |          tv |             3 |          5 |     tv |
    | article10 |          tv |             2 |          6 |     tv |
    

    You can easily exclude anything not <= 5 at that point (which is what the above query does).

    qid & accept id: (25522070, 25522607) query: Creating a table from a Comma Separated List in Oracle (> 11g) - Input string limit 4000 chars soup:
    soup wrap:
    1. Can I make the function work with input strings greater than 4000 characters? Yes - you can use CLOB, for example.

    2. Is there a more effective way of achieving the same result? I saw a good answer in the comments of the blog, describing a recursive solution.

    Just make some datatype changes to make it work, e.g.:

    • change the element type of varchar2_table to CLOB

      TYPE varchar2_table IS TABLE OF CLOB INDEX BY BINARY_INTEGER;
      
    • change the VARCHAR2 datatype to CLOB in all p_delimstring occurrences

    • change the original SUBSTR calls to DBMS_LOB.SUBSTR - note that its argument order is (lob, amount, offset), not (string, position, length) (if you need more info: http://docs.oracle.com/cd/A91202_01/901_doc/appdev.901/a89852/dbms_23b.htm)

      CREATE OR REPLACE PACKAGE parse AS
        /*
        || Package of utility procedures for parsing delimited or fixed position strings into tables
        || of individual values, and vice versa.
        */
        TYPE varchar2_table IS TABLE OF CLOB INDEX BY BINARY_INTEGER;
        PROCEDURE delimstring_to_table
          ( p_delimstring IN CLOB
          , p_table OUT varchar2_table
          , p_nfields OUT INTEGER
          , p_delim IN VARCHAR2 DEFAULT ','
          );
        PROCEDURE table_to_delimstring
          ( p_table IN varchar2_table
          , p_delimstring OUT CLOB
          , p_delim IN VARCHAR2 DEFAULT ','
          );
      END parse;
      /
      CREATE OR REPLACE PACKAGE BODY parse AS
        PROCEDURE delimstring_to_table
          ( p_delimstring IN CLOB
          , p_table OUT varchar2_table
          , p_nfields OUT INTEGER
          , p_delim IN VARCHAR2 DEFAULT ','
          )
        IS
          v_string CLOB := p_delimstring;
          v_nfields PLS_INTEGER := 1;
          v_table varchar2_table;
          v_delimpos PLS_INTEGER := INSTR(p_delimstring, p_delim);
          v_delimlen PLS_INTEGER := LENGTH(p_delim);
        BEGIN
          WHILE v_delimpos > 0
          LOOP
            -- NB: DBMS_LOB.SUBSTR takes (lob, amount, offset), unlike SUBSTR(str, pos, len)
            v_table(v_nfields) := DBMS_LOB.SUBSTR(v_string, v_delimpos-1, 1);
            v_string := DBMS_LOB.SUBSTR(v_string, DBMS_LOB.GETLENGTH(v_string) - v_delimpos - v_delimlen + 1, v_delimpos + v_delimlen);
            v_nfields := v_nfields+1;
            v_delimpos := INSTR(v_string, p_delim);
          END LOOP;
          v_table(v_nfields) := v_string;
          p_table := v_table;
          p_nfields := v_nfields;
        END delimstring_to_table;
        PROCEDURE table_to_delimstring
          ( p_table IN varchar2_table
          , p_delimstring OUT CLOB
          , p_delim IN VARCHAR2 DEFAULT ','
          )
        IS
          v_nfields PLS_INTEGER := p_table.COUNT;
          v_string CLOB;
        BEGIN
          FOR i IN 1..v_nfields
          LOOP
            v_string := v_string || p_table(i);
            IF i != v_nfields THEN
              v_string := v_string || p_delim;
            END IF;
          END LOOP;
          p_delimstring := v_string;
        END table_to_delimstring;
      END parse;
      /
      
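The package's split/join logic is easy to sanity-check outside the database; here is a minimal Python sketch of the same algorithm (plain `str` stands in for CLOB, and the helper names are mine, not part of the package):

```python
def delimstring_to_table(s, delim=","):
    # Mirror of parse.delimstring_to_table: repeatedly cut at the next
    # delimiter, collecting each field; the remainder after the last
    # delimiter becomes the final field.
    fields = []
    pos = s.find(delim)
    while pos >= 0:
        fields.append(s[:pos])
        s = s[pos + len(delim):]
        pos = s.find(delim)
    fields.append(s)
    return fields

def table_to_delimstring(fields, delim=","):
    # Mirror of parse.table_to_delimstring: rejoin with the delimiter.
    return delim.join(fields)
```

The round trip `table_to_delimstring(delimstring_to_table(s, d), d)` should reproduce the original string, including empty fields between adjacent delimiters.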
    qid & accept id: (25531666, 25531725) query: Using WHERE and ORDER BY together in Oracle 10g soup wrap:

    remove the AND

    select * from employees 
    WHERE job_id NOT like '%CLERK' 
    order by last_name
    

    Based on comments, with pseudo code

    select * from employees 
    WHERE job_id != 'CLERKS'
    AND DateAppointedFielName BETWEEN StartDate AND EndDate 
    order by last_name
    
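The clause order (WHERE before ORDER BY, with no stray AND between them) generalizes across engines; as a runnable sanity check, the same shape against an in-memory SQLite database with invented sample rows (not the OP's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (last_name TEXT, job_id TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("Adams", "SH_CLERK"), ("Baker", "IT_PROG"), ("Choi", "ST_CLERK")])

# WHERE filters rows first; ORDER BY then sorts whatever survives.
rows = conn.execute(
    "SELECT last_name FROM employees "
    "WHERE job_id NOT LIKE '%CLERK' "
    "ORDER BY last_name"
).fetchall()
# rows -> [("Baker",)]
```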
    qid & accept id: (25537492, 25537540) query: MySQL - Always return exactly n records soup wrap:

    You can do this using union all and limit:

    (SELECT Diameter
     FROM  `TreeDiameters` 
     WHERE TreeID = ?
    ) union all
    (select NULL as Diameter
     from (select 1 as n union all select 2 union all select 3 union all select 4 union all
           select 5 union all select 6
          ) n 
    )
    ORDER BY Diameter DESC
    LIMIT 0, 6;
    

    MySQL puts NULL values last with a descending sort. But you can also be specific:

    ORDER BY (Diameter is not null) DESC, Diameter DESC
    
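In procedural terms the query does three things: pad the result with NULL filler rows, sort real values first (descending, NULLs last), and cap the output at n. A Python sketch of that logic (function name and sample data are mine):

```python
def exactly_n(values, n=6):
    # UNION ALL with n NULL filler rows guarantees at least n rows...
    padded = list(values) + [None] * n
    # ...ORDER BY Diameter DESC puts real values first, NULLs last...
    padded.sort(key=lambda v: (v is None, -v if v is not None else 0))
    # ...and LIMIT 0, n caps the result at exactly n rows.
    return padded[:n]
```

Fewer than n real values are topped up with `None`; more than n are truncated, just as LIMIT would do.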
    qid & accept id: (25538698, 25539049) query: Batch SQL Server Results by Max Number of Rows soup wrap:

    First, use ROW_NUMBER() partitioned by PersonId to get a ranking that resets to 1 whenever a new PersonId is encountered. Then divide that by 3 (or whatever batch size you want) and apply FLOOR to flatten the results into integers. You now have a batch ID for each row, but it still resets to 1 at each new PersonId, so you're not done. Finally, a DENSE_RANK() ordered by PersonId plus the new "batchid_person_specific" column gives a global batch ID across all rows.
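The three steps can be restated procedurally; this Python sketch (assuming rows shaped like the fiddle's TeamPersonMap, pre-sorted by PersonId then TeamPersonId) mirrors ROW_NUMBER, the FLOOR division, and DENSE_RANK:

```python
from itertools import groupby

def assign_batches(rows, batch_size=3):
    # rows: (team_person_id, team_id, person_id) tuples, pre-sorted by
    # (person_id, team_person_id) - the same order the query relies on.
    out = []
    global_batch = 0                      # DENSE_RANK over (person, batch)
    for person_id, grp in groupby(rows, key=lambda r: r[2]):
        for i, row in enumerate(grp):     # i is ROW_NUMBER() - 1 per person
            per_person_batch = i // batch_size + 1   # FLOOR((rn-1)/3) + 1
            if i % batch_size == 0:       # first row of a new per-person batch
                global_batch += 1         # bump the global batch id
            out.append((global_batch, per_person_batch) + row)
    return out
```

Each output row is (global batch id, per-person batch id, original row), matching the query's BatchGroupId_Final and batchid_person_specific columns.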

    Sql Fiddle here: http://sqlfiddle.com/#!6/3c75d/18

    The result looks like this:

    with qwry as (
    SELECT  
    ROW_NUMBER() OVER (PARTITION BY PersonId order by TeamPersonId) as rownum_nofloor
    , floor((ROW_NUMBER() OVER (PARTITION BY PersonId order by TeamPersonId)-1)/3)+1 as batchid_person_specific
    , *
    FROM TeamPersonMap 
      )
    select 
    DENSE_RANK() OVER (ORDER BY PersonId, batchid_person_specific) as BatchGroupId_Final
    ,* from qwry
    ORDER BY PersonId
    

    Results:

    | BATCHGROUPID_FINAL | ROWNUM_NOFLOOR | BATCHID_PERSON_SPECIFIC | TEAMPERSONID | TEAMID | PERSONID |
    |--------------------|----------------|-------------------------|--------------|--------|----------|
    |                  1 |              1 |                       1 |            1 |      1 |      101 |
    |                  1 |              2 |                       1 |            6 |      2 |      101 |
    |                  1 |              3 |                       1 |           11 |      3 |      101 |
    |                  2 |              4 |                       2 |           16 |      4 |      101 |
    |                  2 |              5 |                       2 |           21 |      5 |      101 |
    |                  3 |              1 |                       1 |            2 |      1 |      102 |
    |                  3 |              2 |                       1 |            7 |      2 |      102 |
    |                  3 |              3 |                       1 |           12 |      3 |      102 |
    |                  4 |              4 |                       2 |           17 |      4 |      102 |
    |                  4 |              5 |                       2 |           22 |      5 |      102 |
    |                  5 |              1 |                       1 |            3 |      1 |      103 |
    |                  5 |              2 |                       1 |            8 |      2 |      103 |
    |                  5 |              3 |                       1 |           13 |      3 |      103 |
    |                  6 |              4 |                       2 |           18 |      4 |      103 |
    |                  6 |              5 |                       2 |           23 |      5 |      103 |
    |                  7 |              1 |                       1 |            4 |      1 |      104 |
    |                  7 |              2 |                       1 |            9 |      2 |      104 |
    |                  7 |              3 |                       1 |           14 |      3 |      104 |
    |                  8 |              4 |                       2 |           19 |      4 |      104 |
    |                  8 |              5 |                       2 |           24 |      5 |      104 |
    |                  9 |              1 |                       1 |            5 |      1 |      105 |
    |                  9 |              2 |                       1 |           10 |      2 |      105 |
    |                  9 |              3 |                       1 |           15 |      3 |      105 |
    |                 10 |              4 |                       2 |           20 |      4 |      105 |
    |                 10 |              5 |                       2 |           25 |      5 |      105 |
    
    qid & accept id: (25543723, 25544510) query: How to efficiently define daily constant? soup wrap:

    You can use environment variables too.

    When you retrieve your "constants", you set them in the environment:

    import os
    os.environ['MY_DAILY_CONST_1'] = 'dailyconst1'
    os.environ['MY_DAILY_CONST_2'] = 'dailyconst2'
    ...
    

    And when you need to access them:

    import os
    myconst1 = os.environ['MY_DAILY_CONST_1']
    ...
    
    qid & accept id: (25547827, 25548615) query: Query to find foreign keys on database schema soup wrap:

    You may use INFORMATION_SCHEMA for this:

    SELECT 
      * 
    FROM  
      INFORMATION_SCHEMA.TABLE_CONSTRAINTS 
    WHERE 
      CONSTRAINT_TYPE='FOREIGN KEY'
    

    The possible constraint types are:

    • PRIMARY KEY for primary keys
    • FOREIGN KEY for foreign keys
    • UNIQUE for unique constraints

    So you're interested in the FOREIGN KEY type. This will show you which table and column carry the constraint, but not the targeted table and column. To find those, you need another table, INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS, which has that information; so, to reconstruct the relation between tables, you'll need:

    SELECT 
      t.TABLE_SCHEMA, 
      t.TABLE_NAME, 
      r.REFERENCED_TABLE_NAME 
    FROM  
      INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS t 
        JOIN INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS AS r 
        ON t.CONSTRAINT_NAME=r.CONSTRAINT_NAME 
    WHERE 
      t.CONSTRAINT_TYPE='FOREIGN KEY'
    

    But that, again, is missing the columns (because they don't belong to those tables) and will only show which tables are related via FKs. To reconstruct the full relation (i.e. with the columns involved), you'll need to refer to the KEY_COLUMN_USAGE table:

    SELECT 
      TABLE_SCHEMA, 
      TABLE_NAME, 
      COLUMN_NAME, 
      REFERENCED_TABLE_SCHEMA, 
      REFERENCED_TABLE_NAME, 
      REFERENCED_COLUMN_NAME 
    FROM 
      INFORMATION_SCHEMA.KEY_COLUMN_USAGE 
    WHERE 
      REFERENCED_TABLE_SCHEMA IS NOT NULL
    

    This query will show all relations where the referenced entity is not null, and since that only happens for foreign keys, it answers the question of finding FK relations. It's quite universal, but I've provided the methods above since they can also be useful for getting info about PK or unique constraints.

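INFORMATION_SCHEMA is the MySQL/standard-SQL route; as a self-contained illustration of the same idea - asking the engine for FK metadata instead of parsing DDL - SQLite exposes the equivalent through PRAGMA foreign_key_list (the schema below is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parents (id INTEGER PRIMARY KEY);
    CREATE TABLE children (
        id INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES parents(id)
    );
""")

# Each result row describes one FK column:
# (id, seq, referenced table, local column, referenced column,
#  on_update, on_delete, match)
fks = conn.execute("PRAGMA foreign_key_list(children)").fetchall()
ref_table, from_col, to_col = fks[0][2], fks[0][3], fks[0][4]
# ref_table -> "parents", from_col -> "parent_id", to_col -> "id"
```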
    qid & accept id: (25560497, 25560525) query: Split Mysql Location entry into 2 columns? soup wrap:

    Executing the query doesn't set the columns. To create the columns do:

    alter table users_profiles add column latitude decimal(10, 4);
    alter table users_profiles add column longitude decimal(10, 4);
    

    To assign them use update:

    update users_profiles
        set latitude = cast(SUBSTRING_INDEX(`location`, ',', 1) as decimal(10, 4)),
            longitude = cast(SUBSTRING_INDEX(location, ',', -1) as decimal(10, 4));
    

    The cast() operations are, strictly speaking, unnecessary. I like to be explicit about casts between strings and other types, in case something unusual happens in code. It can be hard to spot problems with implicit casts.
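SUBSTRING_INDEX(location, ',', 1) takes everything before the first comma and (..., ',', -1) everything after the last one; a small Python equivalent of that split-and-cast, with a hypothetical helper name and Decimal quantization standing in for DECIMAL(10, 4):

```python
from decimal import Decimal

def split_location(location):
    lat_str = location.split(",", 1)[0]    # SUBSTRING_INDEX(location, ',', 1)
    lon_str = location.rsplit(",", 1)[-1]  # SUBSTRING_INDEX(location, ',', -1)
    q = Decimal("0.0001")                  # cast(... as decimal(10, 4))
    return (Decimal(lat_str.strip()).quantize(q),
            Decimal(lon_str.strip()).quantize(q))
```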

    qid & accept id: (25570210, 25601074) query: Identify Duplicate Xml Nodes soup wrap:

    So, I managed to figure out what I needed to do. It's a little clunky though.

    First, you need to wrap the XML SELECT statement in another SELECT against the Unit table, to ensure that we end up with XML representing only that unit.

    Select
    Id,
    (
      Select
        Action, 
        TriggerType,
        IU.TypeId,
        IU.Message,
        (
            Select C.Value, I.QuestionId, I.Sequence
            From UnitCondition C
              Inner Join Item I on C.ItemId = I.Id
            Where C.UnitId = IU.Id
            Order by C.Value, I.QuestionId, I.Sequence
            For XML RAW('Condition'), TYPE
        ) as Conditions
      from UnitType T
        Inner Join Unit IU on T.Id = IU.TypeId
      WHERE IU.Id = U.Id
      For XML RAW ('Unit')
    )
    From Unit U
    

    Then, you can wrap this in another select, grouping the xml up by content.

    Select content, count(*) as cnt
    From
      (
        Select
          Id,
          (
            Select
              Action, 
              TriggerType,
              IU.TypeId,
              IU.Message,
              (
                  Select C.Value, C.ItemId, I.QuestionId, I.Sequence
                  From UnitCondition C
                    Inner Join Item I on C.ItemId = I.Id
                  Where C.UnitId = IU.Id
                  Order by C.Value, I.QuestionId, I.Sequence
                  For XML RAW('Condition'), TYPE
              ) as Conditions
            from UnitType T
              Inner Join Unit IU on T.Id = IU.TypeId
            WHERE IU.Id = U.Id
            For XML RAW ('Unit')
          ) as content
        From Unit U
      ) as data
    group by content
    having count(*) > 1
    

    This will allow you to group entire units where the whole content is identical.

    One thing to watch out for, though: to test "uniqueness", you need to guarantee that the data in the inner XML selection(s) always comes out the same. To that end, apply ordering to the relevant data (i.e. the data in the XML) to ensure consistency. Which order you apply doesn't really matter, as long as two identical collections output in the same order.
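The same "order, serialize, group, count" idea works for any nested data, not just FOR XML output; a Python sketch with an invented record shape:

```python
from collections import Counter

def duplicate_units(units):
    # units: {unit_id: list of condition tuples}. Sorting each unit's
    # conditions is the ORDER BY step: identical content in a different
    # order must still serialize identically.
    canonical = {uid: tuple(sorted(conds)) for uid, conds in units.items()}
    counts = Counter(canonical.values())
    # HAVING COUNT(*) > 1: keep only content shared by several units.
    return {content: n for content, n in counts.items() if n > 1}
```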

    qid & accept id: (25579264, 25579522) query: MySQL: Get the MIN value of a table from all columns & rows soup wrap:

    Here's one solution:

    SELECT least(MIN(nullif(sgl_ro,0))
                ,MIN(nullif(sgl_bb,0))
                ,MIN(nullif(sgl_hb,0))
                ,MIN(nullif(sgl_fb,0)) ) as min_rate
    FROM room_rates
    WHERE hotel_id='1'
    ;
    

    EDIT: Use NULL instead of 'NULL'

    'NULL' is a string, and MySQL has very weird ideas on how to cast between types:

    select case when 0 = 'NULL' 
                then 'ohoy' 
                else 'sailor' 
           end 
    from room_rates;
    
    ohoy
    ohoy
    ohoy
    

    I.e. your solution will work fine once you remove the quotes around NULL:

    SELECT
    LEAST(
    MIN(IF(sgl_ro=0,NULL,sgl_ro))
    ,MIN(IF(sgl_bb=0,NULL,sgl_bb))
    ,MIN(IF(sgl_hb=0,NULL,sgl_hb))
    ,MIN(IF(sgl_fb=0,NULL,sgl_fb))
    ) AS MinRate
    FROM room_rates
    WHERE hotel_id='1'
    ;
    
    MINRATE
    9
    

    Edit: Comparison between DBMS:

    I tested the following scenario on all DBMSs available in SQL Fiddle, plus DB2 10.5:

    create table t(x int);
    insert into t(x) values (1);
    select case when 0 = 'NULL' 
            then 'ohoy' 
            else 'sailor' 
       end 
    from t;
    

    All MySQL versions returned 'ohoy';

    sql.js returned 'sailor';

    all the others (including DB2 10.5) considered the query illegal.

    Edit: handle situation where all columns in a row (or all rows for a column) = 0

    select min(least(coalesce(nullif(sgl_ro,0), 2147483647)
                    ,coalesce(nullif(sgl_bb,0), 2147483647)
                    ,coalesce(nullif(sgl_hb,0), 2147483647) 
                    ,coalesce(nullif(sgl_fb,0), 2147483647) ) )  
    FROM room_rates
    WHERE hotel_id='1'
      AND coalesce(nullif(sgl_ro,0), nullif(sgl_bb,0)
                  ,nullif(sgl_hb,0), nullif(sgl_fb,0)) IS NOT NULL;   
    
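The NULLIF/COALESCE/sentinel combination is easier to follow outside SQL; this Python sketch mirrors the final query's per-row logic (the function name is mine; 2147483647 plays the same larger-than-any-real-rate role):

```python
INT_MAX = 2147483647   # sentinel: larger than any plausible rate

def min_nonzero_rate(rows):
    # rows: (sgl_ro, sgl_bb, sgl_hb, sgl_fb) tuples for one hotel.
    best = INT_MAX
    for row in rows:
        # NULLIF(col, 0) drops zeros; COALESCE(..., INT_MAX) replaces
        # the resulting "NULL" with the sentinel.
        candidates = [v if v != 0 else INT_MAX for v in row]
        best = min(best, min(candidates))     # LEAST(...) then MIN(...)
    return None if best == INT_MAX else best  # all-zero input -> no rate
```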
    qid & accept id: (25585674, 25585719) query: Query for updating a table value based on the total of a column found in multiple tables soup wrap:

    You can use a join in your update query with a UNION ALL subquery:

    UPDATE main_trans m  
    join
    (SELECT id,SUM(prc) prc
    FROM (
    SELECT id,SUM(prc) prc FROM sub_trans_a WHERE id = 'TR01'
    union all
    SELECT id,SUM(prc) prc FROM sub_trans_b WHERE id = 'TR01'
    ) t1
    ) t
    on(t.id = m.id)
    SET m.tot = t.prc 
    WHERE m.id = 'TR01'
    
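Conceptually the statement just sums one id's prc over both sub tables and writes the result into the main row; a plain-Python restatement of that (invented sample figures):

```python
def update_totals(main, sub_trans_a, sub_trans_b, target_id):
    # main: {id: tot}; sub tables: lists of (id, prc) rows.
    combined = sub_trans_a + sub_trans_b                           # UNION ALL
    total = sum(prc for rid, prc in combined if rid == target_id)  # SUM + WHERE
    main[target_id] = total                                        # SET m.tot = t.prc
    return main
```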

    Also, since sub_trans_a and sub_trans_b have the same structure, why use two tables at all? A single table with an extra column for the type (a or b) would do.

    See Demo

    Or, if you want to update the whole main_trans table without supplying id values, you can do so by adding a GROUP BY to the query:

    UPDATE main_trans m  
    join
    (SELECT id,SUM(prc) prc
    FROM (
    SELECT id,SUM(prc) prc FROM sub_trans_a group by id
    union all
    SELECT id,SUM(prc) prc FROM sub_trans_b group by id
    ) t1  group by id
    ) t
    on(t.id = m.id)
    SET m.tot = t.prc 
    

    See Demo 2

    Edit: following a good suggestion by Andomar, you can simplify the inner query as

    UPDATE main_trans m  
    join
    (SELECT id,SUM(prc) prc
    FROM (
    SELECT id,prc FROM sub_trans_a
    union all
    SELECT id,prc FROM sub_trans_b 
    ) t1 WHERE id = 'TR01'
    ) t
    on(t.id = m.id)
    SET m.tot = t.prc 
    WHERE m.id = 'TR01'
    
    qid & accept id: (25652248, 25652374) query: In Oracle SQL, how do I UPDATE columns specified by a priority list? soup wrap:
    update table1 t1
       set roleid = 11
     where roleid = 10 and
           (case when userid = 1 then 1 when userid = 2 then 2 when userid = 3 then 3 else 4 end) =
             (select min(case when userid = 1 then 1 when userid = 2 then 2 when userid = 3 then 3 else 4 end)
                from table1
               where projectid = t1.projectid);
    

    EDIT:

    SQL> create table table1 (projectid number, userid number, roleid number);
    
    Table created.
    
    SQL> insert into table1 values (101, 1, 10);
    
    1 row created.
    
    SQL> insert into table1 values (101, 2, 10);
    
    1 row created.
    
    SQL> insert into table1 values (102, 2, 10);
    
    1 row created.
    
    SQL> insert into table1 values (102, 3, 10);
    
    1 row created.
    
    SQL> insert into table1 values (103, 1, 10);
    
    1 row created.
    
    SQL> select * from table1;
    
     PROJECTID     USERID     ROLEID
    ---------- ---------- ----------
           101          1         10
           101          2         10
           102          2         10
           102          3         10
           103          1         10
    
    SQL> update table1 t1
      2     set roleid = 11
      3   where roleid = 10 and
      4         (case when userid = 1 then 1 when userid = 2 then 2 when userid = 3 then 3 else 4 end) = 
      5           (select min(case when userid = 1 then 1 when userid = 2 then 2 when userid = 3 then 3 else 4 end)
      6              from table1
      7             where projectid = t1.projectid);
    
    3 rows updated.
    
    SQL> select * from table1;           
    
     PROJECTID     USERID     ROLEID
    ---------- ---------- ----------
           101          1         11
           101          2         10
           102          2         11
           102          3         10
           103          1         11
    
    qid & accept id: (25687106, 25721023) query: PL/SQL: Any trick to avoid cloning of objects? soup:


    Based on Alex's suggestion (use an associative array), I have created a package that encapsulates objects, so we can use them in an abstract way, as if they were references:

    create or replace type cla as object        -- complex class
    (
        name varchar2(10)
    );
    
    
    create or replace package eo as     -- package to encapsulate objects
        type ao_t                       -- type for hash (associative array)
            is table of cla
            index by varchar2(100);
        o ao_t;                         -- hash of objects
    end;
    
    
    declare
        o1 varchar2(100);
        o2 varchar2(100);
    begin
        o1 := 'o1';                         -- objects are hash indexes now
        eo.o(o1) := new cla('hi');          -- store new object into the hash
        o2 := o1;                           -- assign object == assign index
        eo.o(o1).name := 'bye';             -- change object attribute
    
        dbms_output.put_line('eo.o(o1).name: ' || eo.o(o1).name);
        dbms_output.put_line('eo.o(o2).name: ' || eo.o(o2).name);   -- equal?
    end;
    

    Now 'bye' is written twice, as expected with object references. The trick is that both o1 and o2 contain the same index (~reference) to the same object. The syntax is a bit more complex, but still very similar to standard object manipulation when accessing both attributes and methods.
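
    A quick way to see the semantics is a Python analogy (purely illustrative, not PL/SQL): the package hash plays the role of a shared dict, and the varchar2 "objects" are just keys into it.

```python
# Handle-based "references": both names hold the same key into one shared store.
store = {}

o1 = len(store)                 # index based on store size, like eo.o.count
store[o1] = {"name": "hi"}      # store a new object under the handle
o2 = o1                         # assigning the handle, not copying the object
store[o1]["name"] = "bye"       # mutate through one handle...

print(store[o2]["name"])        # ...and the other handle sees "bye"
```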

    Assigning one object to another is exactly like standard object assignment:

    o2 := o1;
    

    Same for using an object as a function argument:

    afunc(o1);
    

    Internally, afunc() will just use o1 with the same special syntax to access methods or attributes (and no special syntax to assign):

    eo.o(o1).attrib := 5;
    eo.o(o1).method('nice');
    o3 := o1;
    

    The only requirement to use this trick is to add a hash (type and variable) to the eo package for each class we want to encapsulate.


    Update: Basing the index value on the variable name:

    o1 := 'o1';
    

    could be a problem if, for example, we create the object in a function, since the function would have to know all the values used in the rest of the program in order to avoid repeating one. A solution is to take the value from the hash size:

    o1 := eo.o.count;
    

    That leads us to another problem: the hash content is persistent (since it lives in a package), so more and more objects accumulate in the hash as we create them (even if they are created by the same function). A solution is to remove the object from the hash when we are done with it:

    eo.o.delete(o1);
    

    So the fixed program would be:

    create or replace type cla as object        -- complex class
    (
        name varchar2(10)
    );
    
    
    create or replace package eo as     -- package to encapsulate objects
        type ao_t                       -- type for hash (associative array)
            is table of cla
            index by varchar2(100);
        o ao_t;                         -- hash of objects
    end;
    
    
    declare
        o1 varchar2(100);
        o2 varchar2(100);
    begin
        o1 := eo.o.count;                   -- index based on hash size
        eo.o(o1) := new cla('hi');          -- store new object into the hash
        o2 := o1;                           -- assign object == assign index
        eo.o(o1).name := 'bye';             -- change object attribute
    
        dbms_output.put_line('eo.o(o1).name: ' || eo.o(o1).name);
        dbms_output.put_line('eo.o(o2).name: ' || eo.o(o2).name);   -- equal?
    
        eo.o.delete(o1);                    -- remove object from the hash
        eo.o.delete(o2);                    -- redundant: same key, already removed
    end;
    
    qid & accept id: (25734598, 25734718) query: Get all posts for specific tag with SQL soup:


    I assume you are happy to send two requests to the database.

    First, get all the posts for a given tag:

    SELECT * FROM blog_posts bp 
    WHERE EXISTS (SELECT * FROM blog_tags bt INNER JOIN
                   tags t ON t.id = bt.tag_id
                  WHERE bp.id = bt.post_id
                   AND t.tag = @SearchTag)
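
    The shape of that first query is easy to sanity-check, e.g. in SQLite through Python (the table and column names follow the question's schema; the sample rows are invented, and SQLite's ? placeholder stands in for @SearchTag):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE blog_posts (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE tags       (id INTEGER PRIMARY KEY, tag TEXT);
    CREATE TABLE blog_tags  (post_id INTEGER, tag_id INTEGER);
    INSERT INTO blog_posts VALUES (1, 'post one'), (2, 'post two');
    INSERT INTO tags       VALUES (1, 'sql'), (2, 'python');
    INSERT INTO blog_tags  VALUES (1, 1), (1, 2), (2, 2);
""")

# All posts carrying the tag 'sql' -- only post one qualifies here
rows = con.execute("""
    SELECT title FROM blog_posts bp
    WHERE EXISTS (SELECT * FROM blog_tags bt
                  INNER JOIN tags t ON t.id = bt.tag_id
                  WHERE bp.id = bt.post_id AND t.tag = ?)
""", ("sql",)).fetchall()

print(rows)   # [('post one',)]
```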
    

    Second, I guess you want the tags linked, via shared posts, to the one you are looking for:

    SELECT * FROM tags t
    WHERE EXISTS ( -- Here we link two tags via blog_tags; the inner alias ts must not shadow the outer t
                   SELECT * FROM blog_tags bt1 INNER JOIN
                   blog_tags bt2 ON bt1.post_id = bt2.post_id
                         AND bt1.tag_id != bt2.tag_id INNER JOIN
                   tags ts ON ts.id = bt1.tag_id
                   WHERE ts.tag = @SearchTag
                      AND t.id = bt2.tag_id
    )
    
    qid & accept id: (25790263, 25791396) query: How to convert 2d table into 3d table using SQL soup:


    As Sean Lange said, use a pivot clause, assuming you're on 11g or higher:

    select *
    from classes
    pivot (max(class_size) as class_size
      for (class) in ('I' as i, 'II' as ii, 'III' as iii))
    order by school;
    
    SCHOOL I_CLASS_SIZE II_CLASS_SIZE III_CLASS_SIZE
    ------ ------------ ------------- --------------
    S1               23            12             54 
    S2               57            12             81 
    S3               12            25             65 
    

    SQL Fiddle

    If you're still on an earlier version that doesn't support pivot then you can use a manual approach to do the same thing:

    select school,
      max(case when class = 'I' then class_size end) as i,
      max(case when class = 'II' then class_size end) as ii,
      max(case when class = 'III' then class_size end) as iii
    from classes
    group by school
    order by school;
    
    SCHOOL          I         II        III
    ------ ---------- ---------- ----------
    S1             23         12         54 
    S2             57         12         81 
    S3             12         25         65 
    

    SQL Fiddle.
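
    Since the CASE version is plain SQL, it runs anywhere; for instance, a quick check in SQLite through Python, reusing two schools from the sample data above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE classes (school TEXT, class TEXT, class_size INTEGER)")
con.executemany("INSERT INTO classes VALUES (?, ?, ?)", [
    ("S1", "I", 23), ("S1", "II", 12), ("S1", "III", 54),
    ("S2", "I", 57), ("S2", "II", 12), ("S2", "III", 81),
])

# One row per school, one column per class
rows = con.execute("""
    SELECT school,
      max(CASE WHEN class = 'I'   THEN class_size END) AS i,
      max(CASE WHEN class = 'II'  THEN class_size END) AS ii,
      max(CASE WHEN class = 'III' THEN class_size END) AS iii
    FROM classes
    GROUP BY school
    ORDER BY school
""").fetchall()

print(rows)   # [('S1', 23, 12, 54), ('S2', 57, 12, 81)]
```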

    To show the total for each school as well, just add a sum:

    select school,
      max(case when class = 'I' then class_size end) as i,
      max(case when class = 'II' then class_size end) as ii,
      max(case when class = 'III' then class_size end) as iii,
      sum(class_size) as total
    from classes
    group by school
    order by school;
    

    SQL Fiddle.

    To get a grand-total row too, you could use rollup() (note that in that row the class columns are still max() aggregates over all schools; only total is a true sum):

    select school,
      max(case when class = 'I' then class_size end) as i,
      max(case when class = 'II' then class_size end) as ii,
      max(case when class = 'III' then class_size end) as iii,
      sum(class_size) as total
    from classes
    group by rollup(school)
    order by school;
    
    SCHOOL          I         II        III      TOTAL
    ------ ---------- ---------- ---------- ----------
    S1             23         12         54         89 
    S2             57         12         81        150 
    S3             12         25         65        102 
                   57         25         81        341 
    

    SQL Fiddle. But it might be something you should do in your client/application. SQL*Plus can do this automatically with its compute command, for example.

    qid & accept id: (25839647, 25840404) query: Write query SQLite with selectionArgs soup:


    If you insist on using selectionArgs, you can do it as below.

    First check your arguments and then build your query accordingly. For example:

        String[] myArray = new String[] { "31", "", "3", "" };

        List<String> myQueryValue = new ArrayList<>();
        List<String> conditions = new ArrayList<>();

        if (!myArray[0].equals("")) {          // equals(), not equal()
            conditions.add("UID = ?");
            myQueryValue.add(myArray[0]);
        }

        if (!myArray[1].equals("")) {
            conditions.add("Age > ?");
            myQueryValue.add(myArray[1]);
        }

        if (!myArray[2].equals("")) {
            conditions.add("Room = ?");
            myQueryValue.add(myArray[2]);
        }

        if (!myArray[3].equals("")) {
            conditions.add("Adre = ?");
            myQueryValue.add(myArray[3]);
        }

        // Joining with AND avoids a dangling operator, whichever
        // arguments happen to be empty (android.text.TextUtils)
        String myQueryParam = TextUtils.join(" AND ", conditions);
    

    and at the end

    String[] finalValue = new String[ myQueryValue.size() ];
    myQueryValue.toArray( finalValue );
    Cursor cur = sqlite_obj.query(TableName, null, myQueryParam, finalValue , null, null, null, null);
    

    You can also use a loop to build the values and the query parameters.

    qid & accept id: (25916350, 25917126) query: How to change VARCHAR type to DATETIME using ALTER in Postgresql? soup:


    You want the USING clause of ALTER TABLE ... ALTER COLUMN ... TYPE, together with the to_timestamp function.

    ALTER TABLE mytable 
      ALTER COLUMN thecolumn 
       TYPE TIMESTAMP WITH TIME ZONE 
         USING to_timestamp(thecolumn, 'YYYY-MM-DD HH24:MI:SS');
    

    In this case, as the data looks like it's already in a valid timestamp format, you can probably simplify it to a plain cast instead (to_timestamp is only needed when the text has to be parsed with an explicit format):

    ALTER TABLE mytable
      ALTER COLUMN thecolumn
       TYPE TIMESTAMP WITH TIME ZONE
         USING thecolumn::timestamp with time zone;
    

    You will note that I've used the type name "timestamp with time zone" instead of "datetime". That's because PostgreSQL doesn't actually have a datetime type; the nearest equivalent is timestamp without time zone... but in most cases you actually want to use timestamp with time zone instead. To learn more about timestamps, see the manual.

    qid & accept id: (25940181, 25940389) query: Contains at least a count of different character in a set soup:


    A simple solution would be a pattern like this:

    (.*[abcxyz]){3}
    

    This matches zero or more of any character followed by one of a, b, c, x, y, or z; since the group must repeat at least 3 times, at least 3 of those characters must appear in the subject string.

    To match only strings that contain different letters, you could use a negative lookahead ((?!…)) and a backreference (\N):

    (.*([abcxyz])(?!.*\2)){3}
    

    This matches zero or more of any character followed by one of a, b, c, x, y, or z, as long as the same character does not appear again later in the string (i.e. each repetition matches the last instance of some character); since the group must still repeat at least 3 times, the string must contain at least 3 different characters from the set.
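
    Both patterns are easy to try out; for example in Python (any regex flavour with backreferences and lookaheads behaves the same way):

```python
import re

at_least_three = re.compile(r'(.*[abcxyz]){3}')            # >= 3 hits, repeats allowed
three_distinct = re.compile(r'(.*([abcxyz])(?!.*\2)){3}')  # >= 3 *different* letters

print(bool(at_least_three.search('abba')))     # True: a, b, b (and a fourth hit)
print(bool(at_least_three.search('a-b-')))     # False: only two hits
print(bool(three_distinct.search('aaxx')))     # False: just a and x
print(bool(three_distinct.search('a b c')))    # True: three distinct letters
```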

    Of course, you can change the {3} to anything you like, but note that this only enforces a minimum; it will not work if you need to specify a maximum number of times these characters can appear in your string.

    qid & accept id: (25941109, 25942474) query: Access query to include all values, including Null soup:


    Try using a wildcard instead:

    In (IIf([Forms]![FormQuery]![Completed]=True,"COMPLETED",""),
        IIf([Forms]![FormQuery]![Cancelled]=True,"Cancelled",""),
        IIf([Forms]![FormQuery]![All]=True,[Permits]![Status],"*"))
    

    You can test this... just put this in your query criteria field:

    IIf(True,"*","")
    

    Run it with False instead of True... experiment.

    I recommend you change your method. Use a parameter query but avoid the IN() statement. General how-to at http://accessmvp.com/thedbguy/articles/parameterquerybasics.html.

    Alternatively, use VBA. General how-to at http://answers.microsoft.com/en-us/office/forum/office_2007-access/checkbox-filter-form-for-query/ab65c120-6356-e011-8dfc-68b599b31bf5

    Either one is more typical, and I believe easier to troubleshoot and maintain.

    qid & accept id: (25961419, 25961436) query: I want to get the maximum value from my S_ID column which is declared as varchar type soup:


    You can get the maximum value by using this construct:

    select s_id
    from stock_detail
    order by length(s_id) desc, s_id desc
    limit 1;
    

    This puts the longer values first.
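
    The difference between the character-wise maximum and the length-first ordering is easy to demonstrate, e.g. in SQLite through Python (the sample IDs are invented, but the same ordering trick applies in MySQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE stock_detail (s_id TEXT)")
con.executemany("INSERT INTO stock_detail VALUES (?)",
                [("S_2",), ("S_9",), ("S_10",)])

# Plain MAX() compares character by character, so 'S_9' beats 'S_10':
(char_max,) = con.execute("SELECT max(s_id) FROM stock_detail").fetchone()
print(char_max)    # S_9

# Ordering by length first puts the numerically larger id on top:
(real_max,) = con.execute("""
    SELECT s_id FROM stock_detail
    ORDER BY length(s_id) DESC, s_id DESC LIMIT 1
""").fetchone()
print(real_max)    # S_10
```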

    If you want to use max(), then you need to deconstruct the number. Something like:

    select concat('S_', max(replace(s_id, 'S_', '') + 0))
    from stock_detail;
    

    This allows you to get a numeric maximum value rather than a character maximum value, which is the root of your problem.

    qid & accept id: (25992186, 25992580) query: cast list of strings as int list in sql query / stored procedure soup:


    You can create a table-valued function that takes the nvarchar and creates a new record for each value, given the delimiter you tell it. My example here returns a table with a Value column; you can then use this as a subquery for your IN selection:

    Create  FUNCTION [dbo].[fnSplitVariable]
    (
        @List nvarchar(2000),
        @delimiter nvarchar(5)
    )  
    RETURNS @RtnValue table 
    (
    
        Id int identity(1,1),
        Variable varchar(15),
        Value nvarchar(100)
    ) 
    AS  
    BEGIN
    Declare @Count int
    set @Count = 1
        While (Charindex(@delimiter,@List)>0)
        Begin 
            Insert Into @RtnValue (Value, Variable)
            Select Value    = ltrim(rtrim(Substring(@List,1,Charindex(@delimiter,@List)-1))),
                   Variable = 'V' + convert(varchar,@Count)
            Set @List = Substring(@List,Charindex(@delimiter,@List)+len(@delimiter),len(@List))
            Set @Count = @Count + 1
        End  
    
        Insert Into @RtnValue (Value, Variable)
            Select Value = ltrim(rtrim(@List)), Variable = 'V' + convert(varchar,@Count)
    
            Return
    END
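
    For reference, the split-and-trim behaviour of the loop amounts to this rough Python equivalent (with the same V1, V2, ... labels as the Variable column):

```python
def split_variable(text, delimiter):
    """Split on the delimiter, trim whitespace, and label each piece
    V1, V2, ... like fnSplitVariable's Variable column."""
    pieces = [p.strip() for p in text.split(delimiter)]
    return [("V%d" % i, v) for i, v in enumerate(pieces, start=1)]

print(split_variable("10, 20 ,30", ","))
# [('V1', '10'), ('V2', '20'), ('V3', '30')]
```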
    

    Then in your where statement you could do the following:

    WHERE b.CityID IN (Select Value from fnSplitVariable(@CityIDs, ','))
    

    I have included your original Procedure, and updated it to use the function above:

    ALTER PROCEDURE [dbo].[SearchResume]
         @KeywordSearch nvarchar(500),
         @GreaterThanDate datetime,
         @CityIDs nvarchar(500),
         @ProvinceIDs nvarchar(500),
         @CountryIDs nvarchar(500),
         @IndustryIDs nvarchar(500)
    
    AS
    BEGIN
    
    DECLARE @sql as nvarchar(4000)
    
    SET @sql = N'
           DECLARE      @KeywordSearch nvarchar(500),
                        @CityIDs nvarchar(500),
                        @ProvinceIDs nvarchar(500),
                        @CountryIDs nvarchar(500),
                        @IndustryIDs nvarchar(500) 
    
           SET @KeywordSearch = '''+@KeywordSearch+'''
           SET @CityIDs = '''+@CityIDs+'''
           SET @ProvinceIDs = '''+@ProvinceIDs+'''
           SET @CountryIDs = '''+@CountryIDs+'''
           SET @IndustryIDs = '''+@IndustryIDs+'''
    SELECT DISTINCT
                    UserID,
                    ResumeID,
                    CASE  a.Confidential WHEN 1 THEN ''Confidential'' ELSE LastName + '','' +      FirstName END as ''Name'',
                    a.Description ''ResumeTitle'',
                    CurrentTitle,
                    ModifiedDate,
                    CurrentEmployerName,
                    PersonalDescription,
                    CareerObjectives,
                    CASE ISNULL(b.SalaryRangeID, ''0'') WHEN ''0'' THEN CAST(SalarySpecific as   nvarchar(8)) ELSE c.Description END ''Salary'',
                    e.Description ''EducationLevel'',
                    f.Description ''CareerLevel'',
                    g.Description ''JobType'',
                    h.Description ''Relocate'',
                    i.Description + ''-'' + j.Description + ''-'' + k.Description ''Location''
                FROM dbo.Resume a JOIN dbo.Candidate b ON a.CandidateID = b.CandidateID
                LEFT OUTER JOIN SalaryRange c ON b.SalaryRangeID = c.SalaryRangeID
                JOIN EducationLevel e ON b.EducationLevelID = e.EducationLevelID
                JOIN CareerLevel f ON b.CareerLevelID = f.CareerLevelID
                JOIN JobType g ON b.JobTypeID = g.JobTypeID
                JOIN WillingToRelocate h ON b.WillingToRelocateID = h.WillingToRelocateID
                JOIN City i ON b.CityID = i.CityID
                JOIN StateProvince j ON j.StateProvinceID = b.StateProvinceID
                JOIN Country k ON k.CountryID = b.CountryID
                WHERE ( (ModifiedDate > ''' + CAST(@GreaterThanDate as nvarchar(55)) + ''')
    
    
                        '
    IF (LEN(@CityIDs) >0)
    BEGIN
        SET @sql = @sql + N'AND (b.CityID IN (Select Value from fnSplitVariable(@CityIDs,'','')  ))'
    END
    IF (LEN(@ProvinceIDs) >0)
    BEGIN
        SET @sql = @sql + N'AND (b.StateProvinceID IN (Select Value from    fnSplitVariable(@ProvinceIDs,'','') ))'
    END
    IF (LEN(@CountryIDs) >0)
    BEGIN
        SET @sql = @sql + N'AND (b.CountryID IN (Select Value from fnSplitVariable(@CountryIDs,'','')    ))'
    END
    IF (LEN(@IndustryIDs) >0)
    BEGIN
        SET @sql = @sql + N'AND (b.IndustryPreferenceID IN (Select Value from fnSplitVariable(@IndustryIDs,'','') ))'
    END
    
    IF (LEN(@KeywordSearch) > 0)
    BEGIN
        SET @sql = @sql + N' AND (' + @KeywordSearch + ')'
    END
    
    SET @sql = @sql + N') ORDER BY ModifiedDate desc'
    
    --select @sql
    exec sp_executesql @sql
    
    END
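
    As an aside, splicing the parameter values straight into @sql is what forces the quadrupled quotes above, and it leaves the procedure open to SQL injection. sp_executesql accepts real parameters, so a hypothetical, abbreviated sketch of a safer shape would be (table and column names shortened here; note @KeywordSearch is concatenated as a raw predicate in the original, which cannot be parameterized, but the date and ID lists can be):

    ```sql
    -- Sketch only: pass values as parameters instead of string-splicing them.
    DECLARE @sql nvarchar(4000);
    SET @sql = N'SELECT b.CandidateID
                 FROM dbo.Candidate b
                 WHERE b.ModifiedDate > @GreaterThanDate';

    IF LEN(@CityIDs) > 0
        SET @sql = @sql + N' AND b.CityID IN
            (SELECT Value FROM fnSplitVariable(@CityIDs, '',''))';

    EXEC sp_executesql @sql,
         N'@GreaterThanDate datetime, @CityIDs nvarchar(500)',
         @GreaterThanDate, @CityIDs;
    ```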
    
    qid & accept id: (26001924, 26003864) query: Where array does not contain value Postgres soup:


    A simple, "brute-force" method would be to cast the array to text and check:

    SELECT title, short_url, categories, winning_offer_amount
    FROM   auctions
    WHERE  ended_at IS NOT NULL
    AND    categories::text NOT LIKE '% > %';  -- including blanks?
    

    A clean and elegant solution with unnest() in a NOT EXISTS semi-join:

    SELECT title, short_url, categories, winning_offer_amount
    FROM   auctions a
    WHERE  ended_at IS NOT NULL
    AND    NOT EXISTS (
       SELECT 1
       FROM   unnest(a.categories) AS cat
       WHERE  cat LIKE '% > %'
       );
    

    SQL Fiddle.
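
    If you want to verify the behaviour locally, here is a minimal sketch (the table and column names follow the query above; the sample values are made up):

    ```sql
    CREATE TABLE auctions (
        title                text,
        short_url            text,
        categories           text[],
        winning_offer_amount numeric,
        ended_at             timestamptz
    );

    INSERT INTO auctions VALUES
        ('flat',   'u1', '{Books,Toys}',            10, now()),
        ('nested', 'u2', '{Books,"Home > Garden"}', 20, now());

    -- Both queries above should return only the 'flat' row:
    -- the 'nested' row has a category containing ' > '.
    ```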

    qid & accept id: (26004152, 26004258) query: Combining 2 fields into 1 field soup:


    It's slightly ambiguous, but it sounds like you want to union all the two results together:

    select
        c.CustomerID ID,
        c.CustomerName Cname,
        o.TotalAmt,
        o.OrderType
    from
        Customers c
            left join
        AM_Orders o
            on c.CustomerID = o.CustomerID
    union all    
    select
        c.CustomerID ID,
        c.CustomerName Cname,
        o.TotalAmt,
        o.OrderType
    from
        Customers c
            left join
        PM_Orders o
            on c.CustomerID = o.CustomerID
    order by
        ID;
    

    Or, as Tab suggested, union first and then join. This might deal better with cases where there's an entry in one table but not the other:

    ;with all_orders as (
        select
            o.CustomerID,
            o.TotalAmt,
            o.OrderType
        from
            AM_Orders o
        union all
        select
            o.CustomerID,
            o.TotalAmt,
            o.OrderType
        from
            PM_Orders o
    ) select
        c.CustomerID ID,
        c.CustomerName Cname,
        a.TotalAmt,
        a.OrderType
    from
        Customers c
            left join
        all_orders a
            on c.CustomerID = a.CustomerID
    order by
        ID;
    
    qid & accept id: (26019476, 26020101) query: H2 equivalent to Oracle's user soup:


    Isn't USER a function in H2?

    SELECT USER()
    

    will return the current user. Works as expected as a default value for a column:

    create table MY_TABLE(
      CREATED_BY Varchar2(100) DEFAULT USER() NOT NULL,
      value Varchar2(10)
    );
    INSERT INTO MY_TABLE (value) VALUES ('XXX');
    

    As another user:

    INSERT INTO MY_TABLE (value) VALUES ('YYY');
    SELECT * FROM MY_TABLE;
    

    Result:

    CREATED_BY      VALUE  
    SA              XXX
    SYLVAIN         YYY
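
    H2 also accepts the standard-SQL spelling, CURRENT_USER, with no parentheses (a small sketch; verify on your H2 version):

    ```sql
    -- Standard-SQL alternative to USER() in H2:
    SELECT CURRENT_USER;
    ```

    It should be usable as a column default in the same way as USER().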
    
    qid & accept id: (26046622, 26046922) query: T-SQL: efficiently DELETE records in right table that are not in left table when using RIGHT JOIN soup:
    DELETE FROM [FACT]
    WHERE NOT EXISTS (SELECT 1
                      FROM [DIMENSION]
                      WHERE [FACT].[FK] = [DIMENSION].[PK]
                       AND  [FACT].[TYPE] LIKE 'LAB%')
    

    Since these are FACT and DIM tables, I think you will be deleting a large amount of data; otherwise you wouldn't care much about the performance. When deleting a large amount of data, another thing you can consider is deleting it in smaller chunks, by doing something like the following:

    DECLARE @Deleted_Rows INT;
    SET @Deleted_Rows = 1;
    
    
    WHILE (@Deleted_Rows > 0)
      BEGIN
       -- Delete some small number of rows at a time
        DELETE TOP (10000) FROM [FACT]
        WHERE NOT EXISTS (SELECT 1
                          FROM [DIMENSION]
                          WHERE [FACT].[FK] = [DIMENSION].[PK]
                           AND  [FACT].[TYPE] LIKE 'LAB%')
    
      SET @Deleted_Rows = @@ROWCOUNT;
    END
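
    One refinement worth considering (a sketch, not tested against your schema): pausing briefly between batches gives other sessions a chance to acquire locks, and lets log truncation keep up when the database uses SIMPLE recovery:

    ```sql
    DECLARE @Deleted_Rows INT = 1;

    WHILE (@Deleted_Rows > 0)
    BEGIN
        DELETE TOP (10000) FROM [FACT]
        WHERE NOT EXISTS (SELECT 1
                          FROM [DIMENSION]
                          WHERE [FACT].[FK] = [DIMENSION].[PK]
                           AND  [FACT].[TYPE] LIKE 'LAB%');

        SET @Deleted_Rows = @@ROWCOUNT;

        -- Breathe between batches so other work can interleave
        WAITFOR DELAY '00:00:01';
    END
    ```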
    
    qid & accept id: (26063286, 26064572) query: Matching First and Last Name on two different tables soup:


    Provided you use a third table to hold your long/short names, like so.

    CREATE TABLE TableNames
        ([Id] int, [OfficialName] varchar(7), [Alias] varchar(7))
    ;
    
    INSERT INTO TableNames
        ([Id], [OfficialName], [Alias])
    VALUES
        (1, 'Andrew', 'Andy'),
        (2, 'Andrew', 'Andrew'),
        (3, 'William', 'Bill'),
        (4, 'William', 'William'),
        (5, 'David', 'Dave'),
        (6, 'David', 'David')
    

    The following query should give you what you are looking for.

    SELECT *
    FROM (
        SELECT TableA.Id AS T1_Id
            ,CompanyId AS T1_CompanyId
            ,FirstName AS T1_FirstName
            ,LastName AS T1_LastName
            ,TableNames.OfficialName AS OfficialName
        FROM tableA
        INNER JOIN tableNames ON TableA.FirstName = TableNames.Alias
        ) T1
        ,(
            SELECT tableB.Id AS T2_Id
                ,CompanyId AS T2_CompanyId
                ,FirstName AS T2_FirstName
                ,LastName AS T2_LastName
                ,TableNames.OfficialName AS OfficialName
            FROM tableB
            INNER JOIN tableNames ON TableB.FirstName = TableNames.Alias
            ) T2
    WHERE T1.T1_CompanyId = T2.T2_CompanyId
        AND T1.OfficialName = T2.OfficialName
        AND T1.T1_LastName = T2.T2_LastName
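
    The same logic can also be written with explicit JOIN syntax, which many find easier to read than comma-separated derived tables (an equivalent sketch, not a change in behaviour; it matches on company, official first name, and last name just like the query above):

    ```sql
    SELECT a.Id        AS T1_Id,
           b.Id        AS T2_Id,
           a.CompanyId,
           na.OfficialName,
           a.LastName
    FROM   tableA a
    JOIN   TableNames na ON na.Alias = a.FirstName
    JOIN   tableB b      ON b.CompanyId = a.CompanyId
                        AND b.LastName  = a.LastName
    JOIN   TableNames nb ON nb.Alias = b.FirstName
                        AND nb.OfficialName = na.OfficialName;
    ```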
    

    I set up my solution sqlfiddle at http://sqlfiddle.com/#!3/64514/2

    I hope this helps.

    qid & accept id: (26063793, 26065069) query: Finding duplicate records from table and deleting all but one with latest date soup:


    Start with a SELECT query which identifies the rows you want deleted.

    SELECT y.CreatedBy, y.FileId, y.FileName, y.CreationDate
    FROM YourTable AS y
    WHERE
        y.CreationDate <  
            DMax(
                "CreationDate",
                "YourTable",
                "FileName='" & y.FileName & "'"
                );
    

    After you verify that query identifies the correct rows, convert it to a DELETE query.

    DELETE
    FROM YourTable AS y
    WHERE
        y.CreationDate <  
            DMax(
                "CreationDate",
                "YourTable",
                "FileName='" & y.FileName & "'"
                );
    
    qid & accept id: (26088814, 26088920) query: How to select every Monday date and every Friday date in the year soup:


    Here's one way (you might need to check which day of the week is set up to be the first; here I have Sunday as the first day of the week).

    You can use a table with many rows (more than 365) to CROSS JOIN to in order to get a run of dates (a tally table).

    My sys.columns has over 800 rows; you could use any other table, or even CROSS JOIN a table onto itself to multiply up the number of rows.
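
    For example, if one table might fall short of the number of days you need, cross join it to itself (a hypothetical sketch; any reasonably large table works):

    ```sql
    -- 800+ rows squared is far more than a year's worth of days
    select top (366)
        dateadd(d, row_number() over (order by (select null)),
                cast('31 Dec 2013' as datetime)) as dt
    from sys.columns a
    cross join sys.columns b
    ```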

    Here I used the row_number function to get a running count of rows and incremented the date by 1 day for each row:

    select 
    dateadd(d, row_number() over (order by name), cast('31 Dec 2013' as datetime)) as dt 
    from sys.columns a
    

    With the result set of dates now, it's trivial to check the day of week using datepart()

    SELECT
        dt, 
        datename(dw, dt) 
    FROM 
        (
            select 
                dateadd(d, row_number() over (order by name), cast('31 Dec 2013' as datetime)) as dt 
                from 
                sys.columns a
        ) as dates 
    WHERE 
    (datepart(dw, dates.dt) = 2 OR datepart(dw, dates.dt) = 6)
    AND dt >= '01 Jan 2014' AND dt < '01 Jan 2015'
    

    Edit:

    Here's an example SqlFiddle

    http://sqlfiddle.com/#!6/d41d8/21757

    Edit 2:

    If you want them on the same row: the gaps between weekdays are constant, and Friday is always 4 days after Monday. So do the same but only look for Mondays, then just add 4 days to each Monday...

    SELECT
        dt as MonDate, 
        datename(dw, dt) as MonDateName,
        dateadd(d, 4, dt) as FriDate,
        datename(dw, dateadd(d, 4, dt)) as FriDateName
    FROM 
        (
            select 
                dateadd(d, row_number() over (order by name), cast('31 Dec 2013' as datetime)) as dt 
                from 
                sys.columns a
        ) as dates 
    WHERE 
    datepart(dw, dates.dt) = 2
    AND dt >= '01 Jan 2014' AND dt < '01 Jan 2015'
    AND dt >= '01 Jan 2014' AND dt < '01 Jan 2015'
    

    Example SqlFiddle for this:

    http://sqlfiddle.com/#!6/d41d8/21764

    (note that only a few rows come back because sys.columns is quite small on the SqlFiddle server, try another system table if this is a problem)

    qid & accept id: (26102456, 26102572) query: I need a check constraint on two columns, at least one must be not null soup:


    This can be done with a check constraint that tests each column for NULL and combines the results with OR:

    create table #t (i int
                   , j int
                   , constraint chk_null check (i is not null or j is not null))
    

    The following are the test cases

    insert into #t values (null, null) --> error
    insert into #t values (1, null) --> ok
    insert into #t values (null, 1) --> ok
    insert into #t values (1, 1) --> ok
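
    An equivalent formulation, if you prefer, collapses the test with COALESCE (same behaviour for these two columns, since COALESCE is NULL only when both inputs are NULL):

    ```sql
    create table #t2 (i int
                   , j int
                   -- note: named constraints on temp tables can collide between sessions
                   , constraint chk_null2 check (coalesce(i, j) is not null))
    ```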
    
    qid & accept id: (26162762, 26163259) query: sql select query from a single table, results separated by intervals soup:


    Given a table your_table with columns ts timestamp/datetime and val int, one option if you want to group by minute would be to deduct the seconds part of the date and group by that.

    The same concept should be possible to use for other intervals.

    Using MS SQL it would be:

    select 
        dateadd(second, -DATEPART(second,ts),ts) as ts, 
        SUM(val) as v_sum 
    from your_table
    group by dateadd(second, -DATEPART(second,ts),ts)
    

    I think the Postgresql could be this:

    SELECT 
      date_trunc('minute', ts),
      sum(val) v_sum 
    FROM
      your_table
    GROUP BY date_trunc('minute', ts)
    ORDER BY 1
    

    I tried the MSSQL version and got the desired result; the PG version also seems to work (SQL Fiddle was down at the time, so I could not verify it there).
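
    If you later need a coarser interval, a common MS SQL idiom (a sketch of the same idea) rounds the timestamp down to an N-minute boundary instead of just stripping seconds; here N = 5:

    ```sql
    -- datediff counts whole minutes since the epoch (0 = 1900-01-01),
    -- integer division by 5 then re-multiplying snaps to 5-minute buckets
    select
        dateadd(minute, datediff(minute, 0, ts) / 5 * 5, 0) as ts_bucket,
        sum(val) as v_sum
    from your_table
    group by dateadd(minute, datediff(minute, 0, ts) / 5 * 5, 0)
    ```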

    qid & accept id: (26167223, 26167258) query: How can I count a column with values soup:
    select columnname, count(*)
    from YourTable
    group by columnName
    

    or

    select 
    sum(case when columnname='present' then 1 else 0 end) 'present',
    sum(case when columnname='absent' then 1 else 0 end) 'absent',
    sum(case when columnname='leave' then 1 else 0 end) 'leave'
    from myTable
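
    For instance, with four hypothetical attendance rows, the conditional-aggregation query pivots the counts into one row (a self-contained sketch; the table name is made up):

    ```sql
    create table #att (columnname varchar(10));
    insert into #att values ('present'), ('present'), ('absent'), ('leave');

    select
        sum(case when columnname = 'present' then 1 else 0 end) 'present',
        sum(case when columnname = 'absent'  then 1 else 0 end) 'absent',
        sum(case when columnname = 'leave'   then 1 else 0 end) 'leave'
    from #att;
    -- Returns one row: present = 2, absent = 1, leave = 1
    ```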
    
    qid & accept id: (26208027, 26208078) query: Select from two tables without using an OR soup:


    If the requirement is to not use an OR, you could use UNION instead. Since you filter the department on its number, not on its name, you do not need the second table at all:

    SELECT name FROM employee WHERE salary > 20000
        UNION
    SELECT name FROM employee WHERE dNumber = 1
    

    If you wanted to filter the department by name, a join or a subquery would be required:

    SELECT name FROM employee WHERE salary > 20000
        UNION
    SELECT name FROM employee e
    JOIN department d ON e.dNumber=d.departmentNumber
    WHERE departmentName = 'math'
    
    qid & accept id: (26227103, 26228641) query: SQL query inner join tables, print to HTML soup:

    (Too long for comments)

    I even tried going step by step and creating an array:

    Ignoring db structure for a moment, creating an array is a waste of time :) If the roles are stored as a list, then all you are doing is taking the list, creating a single-element array, and converting it back into the same list again. So get rid of the array; it does not serve any purpose.

    Just loop through the getRole query and use list functions as braketsage suggested. Dan makes a good point about checkboxes, but I will use your original code to better illustrate:

    
    

    Having said that, the real problem is your database structure. Storing lists is one of those things that seems like it will make life easier, but almost always creates more problems than it solves. For example, using the current structure - how you would you identify all users that have the role of "Administrators" and "Guest" but not "SuperUser"?

    While there are some kludgey techniques to get around some of the inherent limitations of storing lists, you should really change the table structure if at all possible. Lists are more prone to data integrity issues. Also, due to the reliance on string functions, it frequently requires convoluted SQL queries that are unable to utilize db indexes, and consequently do not scale well.

    As I mentioned in the comments, a better structure is to create a third table: MemberRole. Store each memberID + roleID combination as a separate row. That structure would offer greater flexibility and reliability. See braketsage's answer for an example. Though the "Edit user" query joins could be simplified a bit. I removed some of the logic for clarity. However, as braketsage noted in the original post, you may want to add additional filters to limit which roles the current user can assign - for security reasons. Otherwise, any user could assign any permissions.

    Note: I added a boolean flag that I like to use in my apps. Using an OUTER JOIN and CASE statement you can create a boolean column called IsAssigned that indicates whether or not each role is assigned to the selected user. That flag comes in handy for pre-selecting list items (or checkboxes) on the edit screen.

    SELECT  ur.roleID
            , ur.roleTitle
            , ur.UserID
            , ur.UserName
            , CASE WHEN p.UserID IS NOT NULL THEN 1 ELSE 0 END AS IsAssigned
    FROM   (
              SELECT u.UserID
                     , u.UserName
                     , r.RoleID
                     , r.RoleTitle
              FROM   Users u CROSS JOIN Roles r
              WHERE  u.UserID = 
    
           ) 
           ur LEFT JOIN Permissions p
                    ON p.RoleID = ur.RoleID
                    AND p.UserID = ur.UserID
    

    NB: Be sure to read up on CROSS JOIN

    That said, for readability I often just run two queries: one to get the user information and another to get the assigned roles. It is an extra db call, but slightly less data pulled back, so the extra query is not too big a deal.

    qid & accept id: (26285750, 26285881) query: Android sql element to listview soup:


    To achieve this you have to build a custom adapter and inflate your custom row layout; using a plain ArrayAdapter won't work.

    So, your custom adapter class could be something like:

    public class CustomAdapter extends BaseAdapter {
        private final Activity activity;
        private final List<Person> list;

        public CustomAdapter(Activity activity, List<Person> list) {
            this.activity = activity;
            this.list = list;
        }

        @Override
        public int getCount() {
            return list.size();
        }

        @Override
        public Object getItem(int position) {
            return list.get(position);
        }

        @Override
        public long getItemId(int position) {
            return position;
        }

        @Override
        public View getView(int position, View convertView, ViewGroup parent) {
            View rowView = convertView;
            ViewHolder view;

            if (rowView == null) {
                // Inflate a new instance of the row layout
                LayoutInflater inflater = activity.getLayoutInflater();
                rowView = inflater.inflate(R.layout.rowlayout, parent, false);

                // Cache the child views so they don't need to be found again
                view = new ViewHolder();
                view.person_name = (TextView) rowView.findViewById(R.id.name);
                view.person_address = (TextView) rowView.findViewById(R.id.address);

                rowView.setTag(view);
            } else {
                view = (ViewHolder) rowView.getTag();
            }

            /** Set data to your Views. */
            Person item = list.get(position);
            view.person_name.setText(item.getName());
            view.person_address.setText(item.getAddress());

            return rowView;
        }

        protected static class ViewHolder {
            protected TextView person_name;
            protected TextView person_address;
        }
    }
    

    And your Person.java class could be as simple as described below:

    public class Person {
        private String name;
        private String address;
    
        public Person(String name, String address) {
            this.name = name;
            this.address = address;
        }
        public void setName(String name) {
            this.name= name;
        }
        public String getName() {
            return name;
        }
        public void setAddress(String address) {
            this.address= address;
        }
        public String getAddress() {
            return address;
        }
    }
    

    Now, in your main activity, just bind your list with some data, like:

    /** Declare and initialize list of Person. */
    ArrayList<Person> list = new ArrayList<Person>();
    
    /** Add some people to the list. */
    list.add(new Person("name1", "address1"));
    list.add(new Person("name2", "address2"));
    list.add(new Person("name3", "address3"));
    list.add(new Person("name4", "address4"));
    list.add(new Person("name5", "address5"));
    list.add(new Person("name6", "address6"));

    At this point you're able to set the custom adapter on your list:

    
    ListView lv = (ListView) findViewById(R.id.mylist);
    
    CustomAdapter adapter = new CustomAdapter(YourMainActivityName.this, list);
    lv.setAdapter(adapter);
    
    qid & accept id: (26313466, 26313536) query: How to get Friends UniqID with Sql Query soup:

    soup wrap:

    You need to choose the opposite ID in the select clause. You could either union 2 queries together, or use a case statement to pick:

    (note, untested code)

    select
        case when SenderID=@ID then ReciverID else SenderID end as OtherPersonID
    from
        Friend_Table
    where
        ReqStatus='True'
        and (SenderID=@ID or ReciverID=@ID)
    

    As a union:

    select ReciverID as OtherPersonID from Friend_Table where (ReqStatus='True' and SenderID=@ID)
    union
    select SenderID as OtherPersonID from Friend_Table where (ReqStatus='True' and ReciverID=@ID)
    

    Also, the correct spelling is actually 'Receiver'.
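    Both forms return the same set; a quick way to check is to run them against a throwaway table. Below is a minimal sketch using Python's sqlite3 with invented sample rows (sqlite takes a named `:id` parameter where SQL Server would use the `@ID` variable; the table and column spellings follow the question):

```python
import sqlite3

# Invented sample data for the Friend_Table described in the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Friend_Table (SenderID INT, ReciverID INT, ReqStatus TEXT)")
conn.executemany(
    "INSERT INTO Friend_Table VALUES (?, ?, ?)",
    [(1, 2, "True"), (3, 1, "True"), (1, 4, "False"), (5, 6, "True")],
)

# CASE form: pick whichever side of the row is not the user we asked about.
case_sql = """
    SELECT CASE WHEN SenderID = :id THEN ReciverID ELSE SenderID END AS OtherPersonID
    FROM Friend_Table
    WHERE ReqStatus = 'True' AND (SenderID = :id OR ReciverID = :id)
"""
# UNION form: one query per direction, merged.
union_sql = """
    SELECT ReciverID AS OtherPersonID FROM Friend_Table WHERE ReqStatus = 'True' AND SenderID = :id
    UNION
    SELECT SenderID FROM Friend_Table WHERE ReqStatus = 'True' AND ReciverID = :id
"""

friends_case = sorted(r[0] for r in conn.execute(case_sql, {"id": 1}))
friends_union = sorted(r[0] for r in conn.execute(union_sql, {"id": 1}))
print(friends_case, friends_union)  # [2, 3] [2, 3]
```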

    qid & accept id: (26332939, 26333053) query: Get minimum hours and maximum hours from mysql soup:

    soup wrap:

    I'm guessing this is what you want:

    SQL Fiddle

    MySQL 5.5.32 Schema Setup:

    CREATE TABLE facility
        (`Id_facility` int, `time_start` varchar(8), `time_end` varchar(8))
    ;
    
    INSERT INTO facility
        (`Id_facility`, `time_start`, `time_end`)
    VALUES
        (1, '07:00:00', '19:00:00'),
        (2, '08:00:00', '20:00:00')
    ;
    

    Query 1:

    SELECT MIN( time_start), MAX( time_end) FROM facility
    

    Results:

    | MIN( TIME_START) | MAX( TIME_END) |
    |------------------|----------------|
    |         07:00:00 |       20:00:00 |
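    Note that MIN/MAX work here even though the columns are varchar(8): zero-padded 'HH:MM:SS' strings compare lexicographically in the same order as chronologically. A small sketch of the same query using Python's sqlite3:

```python
import sqlite3

# Same sample rows as the schema setup above; times are zero-padded
# 'HH:MM:SS' strings, so string MIN/MAX orders them chronologically.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE facility (Id_facility INT, time_start TEXT, time_end TEXT)")
conn.executemany("INSERT INTO facility VALUES (?, ?, ?)",
                 [(1, "07:00:00", "19:00:00"), (2, "08:00:00", "20:00:00")])

row = conn.execute("SELECT MIN(time_start), MAX(time_end) FROM facility").fetchone()
print(row)  # ('07:00:00', '20:00:00')
```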
    
    qid & accept id: (26341790, 26342043) query: Select unique barcode with max timestamp soup:

    soup wrap:

    This should work:

    SELECT   [Barcode], max([TimeStamp])
    FROM     [InventoryLocatorDB].[dbo].[Inventory]
    GROUP BY [Barcode]
    

    Demo

    EDIT

    SELECT [Barcode], [Products], [TimeStamp]
    FROM   [InventoryLocatorDB].[dbo].[Inventory] AS I
    WHERE  [TimeStamp] = (SELECT MAX([TimeStamp])
                          FROM   [InventoryLocatorDB].[dbo].[Inventory]
                          WHERE  [Barcode] = I.[Barcode])
    

    The query retains tuples with the same BarCode / TimeStamp. Depending on the granularity of TimeStamp this may not be valid.

    Demo 2

    There are many ways to "filter" the above result.

    E.g. only one tuple per BarCode, latest TimeStamp, greatest value of Products:

    SELECT [Barcode], [Products], [TimeStamp]
    FROM   [InventoryLocatorDB].[dbo].[Inventory] AS I
    WHERE  [TimeStamp] = (SELECT MAX([TimeStamp])
                          FROM   [InventoryLocatorDB].[dbo].[Inventory]
                          WHERE  [Barcode] = I.[Barcode]) AND
           [Products]  = (SELECT MAX([Products])
                          FROM   [InventoryLocatorDB].[dbo].[Inventory]
                          WHERE  [Barcode] = I.[Barcode] and [TimeStamp] = I.[TimeStamp])
    

    Demo 3
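    The tie-keeping behaviour called out above is easy to reproduce. A sketch using Python's sqlite3 with invented rows (table name simplified, no database/schema prefix), where barcode 'A' has two rows sharing the latest timestamp:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Inventory (Barcode TEXT, Products INT, TimeStamp TEXT)")
conn.executemany("INSERT INTO Inventory VALUES (?, ?, ?)", [
    ("A", 5, "2014-10-13 10:00"),
    ("A", 7, "2014-10-13 12:00"),
    ("A", 9, "2014-10-13 12:00"),  # tie on the latest timestamp for 'A'
    ("B", 2, "2014-10-13 09:00"),
])

# Correlated-subquery form from the EDIT above: keeps both tied 'A' rows.
rows = conn.execute("""
    SELECT Barcode, Products, TimeStamp
    FROM Inventory AS I
    WHERE TimeStamp = (SELECT MAX(TimeStamp) FROM Inventory WHERE Barcode = I.Barcode)
    ORDER BY Barcode, Products
""").fetchall()
print(rows)  # three rows: ('A', 7, ...), ('A', 9, ...), ('B', 2, ...)
```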

    qid & accept id: (26381169, 26381261) query: SQL copy from one table insert into another - using a where clause soup:

    soup wrap:

    Just add the WHERE condition in your SELECT part like

    INSERT INTO Notifications (imageUri, DOB) 
    SELECT d.imageUri, d.birthDate 
    FROM Details d
    JOIN Notifications n
    ON d._id = n.PrimaryId
    

    Moreover, I think you are actually looking to do an UPDATE and not an INSERT, since you said in your post:

    For each record in Details I have seven more records in another table called as Notifications

    In that case, if you INSERT, you will add new records with null in the rest of the fields rather than updating the old records. Moreover, for your case you don't need a WHERE clause but rather a JOIN clause specifying the condition.

    update Notifications
    set imageUri = (select imageUri
                    from Details WHERE _id = Notifications.PrimaryId),
        DOB      = (select birthDate
                    from Details WHERE _id = Notifications.PrimaryId)
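    The correlated UPDATE can be sketched outside SQL Server too. A minimal Python sqlite3 version with invented rows (Details is assumed to carry a birthDate column, matching the INSERT variant above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Details (_id INT, imageUri TEXT, birthDate TEXT)")
conn.execute("CREATE TABLE Notifications (PrimaryId INT, imageUri TEXT, DOB TEXT)")
conn.execute("INSERT INTO Details VALUES (1, 'uri-1', '1990-01-01')")
# Two pre-existing notification rows for the same person, not yet filled in.
conn.executemany("INSERT INTO Notifications (PrimaryId) VALUES (?)", [(1,), (1,)])

# Each Notifications row pulls its values from the matching Details row.
conn.execute("""
    UPDATE Notifications
    SET imageUri = (SELECT imageUri  FROM Details WHERE _id = Notifications.PrimaryId),
        DOB      = (SELECT birthDate FROM Details WHERE _id = Notifications.PrimaryId)
""")
rows = conn.execute("SELECT imageUri, DOB FROM Notifications").fetchall()
print(rows)  # [('uri-1', '1990-01-01'), ('uri-1', '1990-01-01')]
```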
    
    qid & accept id: (26383863, 26384956) query: SQL Selecting one row, displaying several soup:

    soup wrap:

    It seems that what you need to do is UNPIVOT your table. Depending on the version of SQL Server, one could use the UNPIVOT statement. I prefer UNIONing the results instead of using the UNPIVOT syntax. Something like the following should work for you.

    DECLARE @TestResults TABLE (
        BatchID Int,
        TestType CHAR(1),
        TestOne SMALLMONEY,
        TestTwo SMALLMONEY,
        TestThree SMALLMONEY,
        TestFour SMALLMONEY
    )
    INSERT INTO @TestResults
    SELECT 1, 'A', 1.2, 0, 16, 8.2 UNION
    SELECT 2, 'A', 1.3, 1, 15, 7.4
    
    SELECT BatchID, TestType, TestOne, TestTwo, TestThree, TestFour FROM @TestResults
    

    This will return your current results.

    BatchID     TestType TestOne     TestTwo     TestThree   TestFour
    ----------- -------- ----------- ----------- ----------- -----------
    1           A        1.20        0.00        16.00       8.20
    2           A        1.30        1.00        15.00       7.40
    

    Try the following query to UNPIVOT your data.

    SELECT BatchID, 1 AS Test, TestOne AS Result FROM @TestResults UNION ALL
    SELECT BatchID, 2 AS Test, TestTwo FROM @TestResults  UNION ALL
    SELECT BatchID, 3 AS Test, TestThree FROM @TestResults  UNION ALL
    SELECT BatchID, 4 AS Test, TestFour  FROM @TestResults 
    ORDER BY BatchID, Test
    

    This should return the desired results.

    BatchID     Test        Result
    ----------- ----------- -----------
    1           1           1.20
    1           2           0.00
    1           3           16.00
    1           4           8.20
    2           1           1.30
    2           2           1.00
    2           3           15.00
    2           4           7.40
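    The UNION ALL unpivot is portable across databases, since it uses no vendor-specific syntax. A sketch with Python's sqlite3 (an ordinary table stands in for the @TestResults table variable):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE TestResults
    (BatchID INT, TestType TEXT, TestOne REAL, TestTwo REAL, TestThree REAL, TestFour REAL)""")
conn.executemany("INSERT INTO TestResults VALUES (?, ?, ?, ?, ?, ?)",
                 [(1, "A", 1.2, 0, 16, 8.2), (2, "A", 1.3, 1, 15, 7.4)])

# One SELECT per test column, glued together with UNION ALL.
rows = conn.execute("""
    SELECT BatchID, 1 AS Test, TestOne AS Result FROM TestResults UNION ALL
    SELECT BatchID, 2,         TestTwo           FROM TestResults UNION ALL
    SELECT BatchID, 3,         TestThree         FROM TestResults UNION ALL
    SELECT BatchID, 4,         TestFour          FROM TestResults
    ORDER BY BatchID, Test
""").fetchall()
print(rows)  # 8 rows, one (BatchID, Test, Result) triple per original cell
```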
    
    qid & accept id: (26398902, 26399290) query: How to transform XML data into SQL Server table (part 2) soup:

    soup wrap:

    This unpacks your specific data into the result set you've asked for, but how reusable this is depends a lot on what other pieces of XML you might want to unpack:

    declare @inp xml = '...'  -- the sample XML document was lost in extraction;
                              -- it is a <root>/<log> message whose field values
                              -- included 6000911384

    select
     n.value('@at','varchar(10)') + SUBSTRING(n.value('@at','varchar(30)'),20,5) as AT,
     n.value('@lifespan','varchar(20)') as lifespan,
     n.value('receive[1]/isomsg[1]/@direction','varchar(10)') as direction,
     n.value('receive[1]/isomsg[1]/field[@id="0"][1]/@value','varchar(10)') as id_0,
     n.value('receive[1]/isomsg[1]/field[@id="3"][1]/@value','varchar(10)') as id_3,
     n.value('receive[1]/isomsg[1]/field[@id="11"][1]/@value','varchar(10)') as id_11
    from @inp.nodes('/root/log') n(n)

    Result:

    AT              lifespan             direction  id_0       id_3       id_11
    --------------- -------------------- ---------- ---------- ---------- ----------
    Wed Oct 15 2014 2279ms               IN         0800       980000     000852
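    The sample XML itself was mangled in this copy of the answer, but the XPath expressions imply its shape. For comparison, here is a rough Python equivalent using xml.etree.ElementTree against a hypothetical <root>/<log> document (the at timestamp is invented apart from the date; the other attribute values come from the result above):

```python
import xml.etree.ElementTree as ET

# Hypothetical document shaped the way the XPath expressions above imply.
doc = ET.fromstring("""
<root>
  <log at="Wed Oct 15 10:43:12 CST 2014" lifespan="2279ms">
    <receive>
      <isomsg direction="IN">
        <field id="0" value="0800"/>
        <field id="3" value="980000"/>
        <field id="11" value="000852"/>
      </isomsg>
    </receive>
  </log>
</root>
""")

rows = []
for log in doc.findall("log"):
    isomsg = log.find("receive/isomsg")
    fields = {f.get("id"): f.get("value") for f in isomsg.findall("field")}
    rows.append((log.get("at"), log.get("lifespan"), isomsg.get("direction"),
                 fields["0"], fields["3"], fields["11"]))
print(rows)
```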
    
    qid & accept id: (26429966, 26446355) query: Partition by ignoring some columns soup:

    soup wrap:

    Rows can be excluded from the row number by using two case statements. The first one creates two separate partitions, one for excluded rows and one for the rows you care about. The outer case then displays null for the excluded rows.

    select
        case
            when status = 'Exclude' then
                null
            else
                row_number() over
                (
                    partition by case when status = 'Exclude' then 0 else 1 end
                    order by numb
                )
        end new_rownumber,
        data.*
    from
    (
        select 1 numb, 'Bill' name, 'blah1' text, 'GOOD'    status from dual union all
        select 1 numb, 'Bill' name, 'blah2' text, 'Exclude' status from dual union all
        select 2 numb, 'Jack' name, 'blah3' text, 'GOOD'    status from dual union all
        select 2 numb, 'Jack' name, 'blah4' text, 'Exclude' status from dual union all
        select 3 numb, 'Will' name, 'blah5' text, 'GOOD'    status from dual union all
        select 3 numb, 'Will' name, 'blah6' text, 'Exclude' status from dual union all
        select 4 numb, 'Andy' name, 'blah7' text, 'GOOD'    status from dual union all
        select 4 numb, 'Andy' name, 'blah8' text, 'GOOD'    status from dual 
    ) data
    order by numb, status desc;
    

    The results don't exactly match. The example uses 1 twice for the new row number - is that a mistake?

    NEW_ROWNUMBER   NUMB   NAME   TEST    STATUS
    -------------   ----   ----   ----    ------
    1               1      Bill    blah1  GOOD
                    1      Bill    blah2  Exclude
    2               2      Jack    blah3  GOOD
                    2      Jack    blah4  Exclude
    3               3      Will    blah5  GOOD
                    3      Will    blah6  Exclude
    5               4      Andy    blah7  GOOD
    4               4      Andy    blah8  GOOD
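    Stripped of SQL, the trick is a single running counter that is bumped only for non-excluded rows, with null (here None) emitted for the rest. A plain-Python sketch over the same sample rows (it numbers Andy's two GOOD rows 4 and 5 in input order; the query's order by numb alone leaves their relative order unspecified, which is why the output above shows 5 before 4):

```python
data = [  # (numb, name, text, status) rows from the example above
    (1, "Bill", "blah1", "GOOD"), (1, "Bill", "blah2", "Exclude"),
    (2, "Jack", "blah3", "GOOD"), (2, "Jack", "blah4", "Exclude"),
    (3, "Will", "blah5", "GOOD"), (3, "Will", "blah6", "Exclude"),
    (4, "Andy", "blah7", "GOOD"), (4, "Andy", "blah8", "GOOD"),
]

counter = 0
numbered = []
for numb, name, text, status in sorted(data, key=lambda r: r[0]):  # order by numb
    if status == "Exclude":
        numbered.append((None, numb, name, text, status))  # excluded: no number
    else:
        counter += 1                                       # shared sequence for kept rows
        numbered.append((counter, numb, name, text, status))
print([row[0] for row in numbered])  # [1, None, 2, None, 3, None, 4, 5]
```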
    
    qid & accept id: (26430013, 26430125) query: Creating a Table in SQL, where each tuple can have mutiple values soup:

    soup wrap:

    You should create three separate tables:

    "persons"
    int ID (primary key, auto-increment)
    varchar username
    varchar email ... (all other info needed)
    
    "places"
    int ID (primary key, auto-increment)
    varchar name
    etc.
    

    And the third table gives you the relationship between the two:

    "person_places" (or place_persons, depends on what you like)
    int ID (primary key, auto-increment)
    int place_id (linked to the ID of the "places" entry)
    int person_id (linked to the ID of the "persons" entry)
    

    This way, every time a person starts working in a new place, you just add an entry to the "person_places". Same thing when they leave a place, or a place goes out of business or whatever, you just need to touch the "person_places" table.

    Also, this way, one person can work in several places, just like one place can have several people working in it.
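    A sketch of the three-table design using Python's sqlite3 with invented names and sample data; note that adding or removing a job only ever touches the person_places junction table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE persons (id INTEGER PRIMARY KEY AUTOINCREMENT, username TEXT, email TEXT);
    CREATE TABLE places  (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT);
    CREATE TABLE person_places (
        id        INTEGER PRIMARY KEY AUTOINCREMENT,
        place_id  INTEGER REFERENCES places(id),
        person_id INTEGER REFERENCES persons(id)
    );
    INSERT INTO persons (username, email) VALUES ('alice', 'a@example.com'), ('bob', 'b@example.com');
    INSERT INTO places (name) VALUES ('Cafe'), ('Library');
    -- alice works in both places, bob only in the cafe
    INSERT INTO person_places (place_id, person_id) VALUES (1, 1), (2, 1), (1, 2);
""")

# Who works where: join through the junction table.
workers = conn.execute("""
    SELECT pl.name, pe.username
    FROM person_places pp
    JOIN persons pe ON pe.id = pp.person_id
    JOIN places  pl ON pl.id = pp.place_id
    ORDER BY pl.name, pe.username
""").fetchall()
print(workers)  # [('Cafe', 'alice'), ('Cafe', 'bob'), ('Library', 'alice')]
```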

    qid & accept id: (26449233, 26449681) query: SQL:How to get min Quantity? soup:

    soup wrap:

    SQL Server 2005 supports window functions, so you can do something like this:

    select id,
           name,
           NaID,
           nation_name,
           qty
    from  (
      select p.id,
             p.name,
             p.NaID,
             n.name as nation_name,
             min(p.Qty) over (partition by n.naid) as min_qty,
             p.qty
      from Product p
         join Nation n on p.NaID = n.NaID
    ) t
    where qty = min_qty;
    

    If there is more than one nation with the same minimum value, you will get each of them. If you don't want that, you need to use row_number()

    select id,
           name,
           NaID,
           nation_name,
           qty
    from  (
      select p.id,
             p.name,
             p.NaID,
             n.name as nation_name,
             row_number() over (partition by n.naid order by p.qty) as rn,
             p.qty
      from Product p
         join Nation n on p.NaID = n.NaID
    ) t
    where rn = 1;
    

    As your example output only includes the NaID but not the nation's name, you don't really need the join between product and nation.


    (There is no DBMS product called "SQL 2005". SQL is just a (standard) for a query language. The DBMS product you mean is called Microsoft SQL Server 2005. Or just SQL Server 2005).
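    The difference between the two queries only shows up on ties. A plain-Python sketch with invented rows, where one nation has two products tied at the minimum quantity: the min-filter keeps both, while the row_number-style pick keeps exactly one row per nation:

```python
products = [  # invented (id, name, NaID, qty) rows
    (1, "rice", 10, 5),
    (2, "tea",  10, 5),  # tied minimum for nation 10
    (3, "silk", 10, 9),
    (4, "wool", 20, 3),
]

# min(...) over (partition by naid), then qty = min_qty: keeps all tied rows.
min_qty = {}
for _, _, naid, qty in products:
    min_qty[naid] = min(qty, min_qty.get(naid, qty))
min_filter = [p for p in products if p[3] == min_qty[p[2]]]

# row_number() over (partition by naid order by qty), then rn = 1:
# exactly one row per nation, ties broken arbitrarily.
row_number_pick = {}
for p in sorted(products, key=lambda p: p[3]):
    row_number_pick.setdefault(p[2], p)

print(len(min_filter), len(row_number_pick))  # 3 2
```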

    qid & accept id: (26494428, 26500162) query: Using function based index (oracle) to speed up count(X) soup:

    soup wrap:

    Add a redundant predicate to the query to convince Oracle that the expression will not return null values and an index can be used:

    select regexp_replace(film.title, '(\w+).*$','\1') first_word
    from film
    where regexp_replace(film.title, '(\w+).*$','\1') is not null;
    

    Oracle can use an index like a skinny version of a table. Many queries only contain a small subset of the columns in a table. If all the columns in that set are part of the same index, Oracle can use that index instead of the table. This will be either an INDEX FAST FULL SCAN or an INDEX FULL SCAN. The data may be read similar to the way a regular table scan works. But since the index is much smaller than the table, that access method can be much faster.

    But function-based indexes do not store NULLs. Oracle cannot use an index scan if it thinks there is a NULL that is not stored in the index. In this case, if the base column was defined as NOT NULL, the regular expression would always return a non-null value. But unsurprisingly, Oracle has not built code to determine whether or not a regular expression could return NULL. That sounds like an impossible task, similar to the halting problem.

    There are several ways to convince Oracle that the expression is not null. The simplest may be to repeat the predicate and add an IS NOT NULL condition.

    Sample Schema

    create table film (
    film_id number(5) not null,
    title varchar2(255) not null);
    
    insert into film select rownumber, column_value
    from
    (
        select rownum rownumber, column_value from table(sys.odcivarchar2list(
        q'',
        q'',
        q'',
        q'',
        q'',
        q'',
        q'',
        q'<12 Angry Men>',
        q'',
        q''))
    );
    
    create index film_idx1 on film(regexp_replace(title, '(\w+).*$','\1'));
    
    begin
        dbms_stats.gather_table_stats(user, 'FILM');
    end;
    /
    

    Query that does not use index

    Even with an index hint, the normal query will not use an index. Remember that hints are directives, and this query would use the index if it was possible.

    explain plan for
    select /*+ index_ffs(film) */ regexp_replace(title, '(\w+).*$','\1') first_word
    from film;
    
    select * from table(dbms_xplan.display);
    
    Plan hash value: 1232367652
    
    --------------------------------------------------------------------------
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------
    |   0 | SELECT STATEMENT  |      |    10 |    50 |     3   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| FILM |    10 |    50 |     3   (0)| 00:00:01 |
    --------------------------------------------------------------------------
    

    Query that uses index

    Now add the extra condition and the query will use the index. I'm not sure why it uses an INDEX FULL SCAN instead of an INDEX FAST FULL SCAN. With such small sample data it doesn't matter. The important point is that an index is used.

    explain plan for
    select regexp_replace(film.title, '(\w+).*$','\1') first_word
    from film
    where regexp_replace(film.title, '(\w+).*$','\1') is not null;
    
    select * from table(dbms_xplan.display);
    
    Plan hash value: 1151375616
    
    ------------------------------------------------------------------------------
    | Id  | Operation        | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
    ------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT |           |    10 |    50 |     1   (0)| 00:00:01 |
    |*  1 |  INDEX FULL SCAN | FILM_IDX1 |    10 |    50 |     1   (0)| 00:00:01 |
    ------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       1 - filter( REGEXP_REPLACE ("TITLE",'(\w+).*$','\1') IS NOT NULL)
    
    qid & accept id: (26496183, 26496317) query: sql that identifies which account numbers have multiple agents soup:

    soup wrap:

    You can do this using window functions:

    select t.account_number, t.agent_name
    from (select t.*, min(agent_name) over (partition by account_number) as minan,
                 max(agent_name) over (partition by account_number) as maxan
          from table t
         ) t
    where minan <> maxan;
    

    If you know the agent names are never duplicated, you could just do:

    select t.account_number, t.agent_name
    from (select t.*, count(*) over (partition by account_number) as cnt
          from table t
         ) t
    where cnt > 1;
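
    A minimal sketch of the min/max-over-partition idea, using Python's sqlite3 (window functions need SQLite 3.25+; the table and data here are invented stand-ins):

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE accounts (account_number TEXT, agent_name TEXT)")
    con.executemany(
        "INSERT INTO accounts VALUES (?, ?)",
        [("A1", "Jones"), ("A1", "Smith"), ("A2", "Brown")],
    )

    # If min and max of agent_name differ within an account, that account
    # has more than one distinct agent.
    rows = con.execute("""
        SELECT account_number, agent_name
        FROM (SELECT t.*,
                     MIN(agent_name) OVER (PARTITION BY account_number) AS minan,
                     MAX(agent_name) OVER (PARTITION BY account_number) AS maxan
              FROM accounts t) t
        WHERE minan <> maxan
    """).fetchall()

    print(rows)  # only the A1 rows come back
    ```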
    
    qid & accept id: (26518526, 26519466) query: Conditional JOIN Statement SQL Server soup:

    soup wrap:

    I think what you are asking for will work by joining the Initial table to both Option_A and Option_B using LEFT JOIN, which will produce something like this:

    Initial LEFT JOIN Option_A LEFT JOIN NULL
    OR
    Initial LEFT JOIN NULL LEFT JOIN Option_B
    

    Example code:

    SELECT i.*, COALESCE(a.id, b.id) as Option_Id, COALESCE(a.name, b.name) as Option_Name
    FROM Initial_Table i
    LEFT JOIN Option_A_Table a ON a.initial_id = i.id AND i.special_value = 1234
    LEFT JOIN Option_B_Table b ON b.initial_id = i.id AND i.special_value <> 1234
    

    Once you have done this, you 'ignore' the set of NULLs. The additional trick here is in the SELECT line, where you need to decide what to do with the NULL fields. If the Option_A and Option_B tables are similar, then you can use the COALESCE function to return the first non-NULL value (as per the example).

    The other option is to simply list the Option_A fields and the Option_B fields separately, and let whatever consumes the ResultSet determine which fields to use.
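
    A small sqlite3 sketch of the two-LEFT-JOIN-plus-COALESCE pattern (all table and column names here are invented stand-ins for the answer's schema):

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE initial_t (id INTEGER, special_value INTEGER);
        CREATE TABLE option_a (initial_id INTEGER, name TEXT);
        CREATE TABLE option_b (initial_id INTEGER, name TEXT);
        INSERT INTO initial_t VALUES (1, 1234), (2, 9);
        INSERT INTO option_a VALUES (1, 'from A'), (2, 'from A');
        INSERT INTO option_b VALUES (1, 'from B'), (2, 'from B');
    """)

    # Each row joins to at most one of the two option tables, depending on
    # special_value; COALESCE picks whichever side actually matched.
    rows = con.execute("""
        SELECT i.id, COALESCE(a.name, b.name) AS option_name
        FROM initial_t i
        LEFT JOIN option_a a ON a.initial_id = i.id AND i.special_value = 1234
        LEFT JOIN option_b b ON b.initial_id = i.id AND i.special_value <> 1234
        ORDER BY i.id
    """).fetchall()

    print(rows)  # row 1 matches option_a, row 2 falls through to option_b
    ```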

    qid & accept id: (26548087, 26548384) query: Easiest way to query a SQL Server 2008 R2 XML data type? soup:

    soup wrap:

    If the column is already of the XML data type in SQL Server, then the code below should work, using the value() method with an XPath expression. If it's stored as a varchar, you'd just need to replace ClassXML.value with CONVERT(XML, ClassXML).value. Hope this helps!

    DECLARE @Data TABLE (ClassXML XML)
    INSERT @Data VALUES ('<CustomContentData><prpIsRSSFeed>false</prpIsRSSFeed></CustomContentData>')
    
    SELECT
        CONVERT(BIT, CASE WHEN ClassXML.value ('(/CustomContentData/prpIsRSSFeed)[1]',
            'VARCHAR(50)') = 'true' THEN 1 ELSE 0 END) AS IsRssFeed
    FROM @Data
    

    Yields output

    IsRssFeed
    ---------
    0
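
    For comparison, the same extraction can be sketched outside the database with Python's xml.etree; the XML fragment below is reconstructed from the element names in the query's XPath:

    ```python
    import xml.etree.ElementTree as ET

    class_xml = ("<CustomContentData>"
                 "<prpIsRSSFeed>false</prpIsRSSFeed>"
                 "</CustomContentData>")

    # Mirror the T-SQL CASE: map the element text onto a 0/1 flag.
    root = ET.fromstring(class_xml)
    is_rss_feed = 1 if root.findtext("prpIsRSSFeed") == "true" else 0
    print(is_rss_feed)  # 0
    ```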
    
    qid & accept id: (26592173, 26592457) query: SQL identify whether a word in nvarchar variable is listed in lookup table soup:

    soup wrap:

    Given your new requirements, I'm actually going to point you towards this answer that suggests (strongly) you use the Full Text Search functionality in SQL Server. If that is unavailable, though, you can take the performance hit of doing this yourself and use the following code:

    SELECT MyWord, @String AS SearchPhrase 
    FROM MyLookup 
    WHERE '.' + @String + '.' LIKE '%[^a-z]'+MyWord+'[^a-z]%'
    

    Full example with sample data:

    DECLARE @MyLookup TABLE (MyWord VARCHAR(20))
    INSERT INTO @MyLookup (MyWord) VALUES ('Flubber')
    
    DECLARE @String nVARCHAR(4000)
    
    
    SET @String = N'I really like watching Flubbers'
    SELECT MyWord, @String AS SearchPhrase FROM @MyLookup WHERE '.' + @String + '.' LIKE '%[^a-z]'+MyWord+'[^a-z]%'
    
    SET @String = N'I really like watching Flubber.'
    SELECT MyWord, @String AS SearchPhrase FROM @MyLookup WHERE '.' + @String + '.' LIKE '%[^a-z]'+MyWord+'[^a-z]%'
    
    SET @String = N'I really like watching Flubber, is that weird?'
    SELECT MyWord, @String AS SearchPhrase FROM @MyLookup WHERE '.' + @String + '.' LIKE '%[^a-z]'+MyWord+'[^a-z]%'
    
    SET @String = N'I really like watching the Flubber movie'
    SELECT MyWord, @String AS SearchPhrase FROM @MyLookup WHERE '.' + @String + '.' LIKE '%[^a-z]'+MyWord+'[^a-z]%'
    
    SET @String = N'I really like watching Flubber!'
    SELECT MyWord, @String AS SearchPhrase FROM @MyLookup WHERE '.' + @String + '.' LIKE '%[^a-z]'+MyWord+'[^a-z]%'
    
    SET @String = N'I really like watchingFlubber'
    SELECT MyWord, @String AS SearchPhrase FROM @MyLookup WHERE '.' + @String + '.' LIKE '%[^a-z]'+MyWord+'[^a-z]%'
    

    EDIT: talk about a moving target ... I've taken the code from your comment (using full text search) and printed off a return string to replace the word with asterisks.

    Note that if the word shows multiple times in the same search string, this will replace all instances. I don't have access to an instance with full text search enabled, so you'll have to confirm that this is working as expected.

    DECLARE @String NVARCHAR(4000) 
    DECLARE @MatchedWord NVARCHAR(100) 
    DECLARE @ReturnString NVARCHAR(4000) 
    SET @String = 'i really like watching Flubber' 
    
    SELECT 
        @MatchedWord = MyWord,
        @ReturnString = REPLACE(@String, MyWord, REPLICATE('*', LEN(MyWord)))
    FROM MyLookup 
    WHERE FREETEXT (DESCR,@String) 
    
    PRINT CONVERT(VARCHAR(4000), @MatchedWord)
    PRINT CONVERT(VARCHAR(4000), @ReturnString)
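
    The `'%[^a-z]' + MyWord + '[^a-z]%'` trick pads the string with dots so a non-letter must appear on both sides of the word. A rough Python re equivalent (case-insensitive, like the default SQL Server collation):

    ```python
    import re

    def contains_word(phrase: str, word: str) -> bool:
        # Pad with dots so the character classes can match at either end,
        # just like '.' + @String + '.' in the T-SQL version.
        padded = "." + phrase + "."
        return re.search("[^a-z]" + re.escape(word) + "[^a-z]", padded,
                         re.IGNORECASE) is not None

    print(contains_word("I really like watching Flubber!", "Flubber"))  # True
    print(contains_word("I really like watching Flubbers", "Flubber"))  # False
    print(contains_word("I really like watchingFlubber", "Flubber"))    # False
    ```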
    
    qid & accept id: (26607450, 26608029) query: Roll back Update or delete data Through Flashback query soup:

    soup wrap:

    There are two ways to do this:

    1. Flashback by SCN

    SELECT column_list
    FROM table_name
    AS OF SCN scn_number;
    

    2. Flashback by TIMESTAMP

    SELECT column_list
    FROM table_name
    AS OF TIMESTAMP TO_TIMESTAMP('the timestamp value');
    

    To get the current SCN and SYSTIMESTAMP, query:

    SELECT current_scn, SYSTIMESTAMP
    FROM v$database;
    

    Update: an example, as requested by the OP.

    To flash the table back to an old SCN, use the FLASHBACK TABLE ... TO SCN clause.

    SQL> DROP TABLE string_ex PURGE;
    
    Table dropped.
    
    SQL> CREATE TABLE string_ex (sl_ps_code VARCHAR2(20) );
    
    Table created.
    
    SQL> INSERT INTO string_ex (sl_ps_code) VALUES ('AR14ASM0002');
    
    1 row created.
    
    SQL> INSERT INTO string_ex (sl_ps_code) VALUES ('AR14SFT0018');
    
    1 row created.
    
    SQL> INSERT INTO string_ex (sl_ps_code) VALUES ('AR14SFT0019');
    
    1 row created.
    
    SQL> INSERT INTO string_ex (sl_ps_code) VALUES ('AR14SFT0062');
    
    1 row created.
    
    SQL> COMMIT;
    
    Commit complete.
    
    SQL> SELECT current_scn, SYSTIMESTAMP FROM v$database;
    
             CURRENT_SCN SYSTIMESTAMP
    -------------------- --------------------------------------------
          13818123201277 29-OCT-14 03.02.17.419000 PM +05:30
    
    SQL> SELECT current_scn, SYSTIMESTAMP FROM v$database;
    
             CURRENT_SCN SYSTIMESTAMP
    -------------------- --------------------------------------------
          13818123201280 29-OCT-14 03.02.22.785000 PM +05:30
    
    SQL> SELECT current_scn, SYSTIMESTAMP FROM v$database;
    
             CURRENT_SCN SYSTIMESTAMP
    -------------------- --------------------------------------------
          13818123201282 29-OCT-14 03.02.26.781000 PM +05:30
    
    SQL> SELECT * FROM string_ex;
    
    SL_PS_CODE
    ---------------
    AR14ASM0002
    AR14SFT0018
    AR14SFT0019
    AR14SFT0062
    
    SQL>
    

    I have four rows in the table.

    SQL> ALTER TABLE string_ex ENABLE ROW MOVEMENT;
    
    Table altered.
    
    SQL>
    

    Row movement must be enabled before a table can be flashed back.

    SQL> DELETE FROM string_ex WHERE ROWNUM =1;
    
    1 row deleted.
    
    SQL>
    SQL> COMMIT;
    
    Commit complete.
    
    SQL>
    SQL> SELECT * FROM string_ex;
    
    SL_PS_CODE
    ---------------
    AR14SFT0018
    AR14SFT0019
    AR14SFT0062
    

    I then deleted a row and committed the change.

    SQL> FLASHBACK TABLE string_ex TO SCN 13818123201277;
    
    Flashback complete.
    

    The flashback is complete.

    SQL> SELECT * FROM string_ex;
    
    SL_PS_CODE
    ---------------
    AR14ASM0002
    AR14SFT0018
    AR14SFT0019
    AR14SFT0062
    
    SQL>
    

    The table is now back in its old state and the deleted row has been restored.

    qid & accept id: (26682614, 26682658) query: Finding Non Matches in SQL Statement soup:

    soup wrap:

    EXISTS OPERATOR

    SELECT *
    FROM updated u
    WHERE NOT EXISTS (SELECT 1
                      FROM accounts 
                      WHERE `name` = u_s_customer)
    

    LEFT JOIN

    SELECT *
    FROM updated LEFT JOIN accounts 
    ON `name` = u_s_customer
    WHERE name IS NULL
    

    NOT IN

    SELECT *
    FROM updated 
    WHERE name NOT IN (SELECT u_s_customer
                       FROM accounts )
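
    A quick sqlite3 sketch (invented sample schema) showing all three patterns returning the same anti-join result; note that the NOT IN variant silently returns no rows at all if the subquery ever yields a NULL, which is why NOT EXISTS is often the safer choice:

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE updated (name TEXT);
        CREATE TABLE accounts (u_s_customer TEXT);
        INSERT INTO updated VALUES ('acme'), ('globex');
        INSERT INTO accounts VALUES ('acme');
    """)

    not_exists = con.execute("""
        SELECT name FROM updated u
        WHERE NOT EXISTS (SELECT 1 FROM accounts WHERE u_s_customer = u.name)
    """).fetchall()

    left_join = con.execute("""
        SELECT u.name FROM updated u
        LEFT JOIN accounts a ON a.u_s_customer = u.name
        WHERE a.u_s_customer IS NULL
    """).fetchall()

    not_in = con.execute("""
        SELECT name FROM updated
        WHERE name NOT IN (SELECT u_s_customer FROM accounts)
    """).fetchall()

    # All three find the customer with no matching account row.
    print(not_exists, left_join, not_in)
    ```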
    
    qid & accept id: (26711455, 26712299) query: SQL "First relevant day" soup:

    soup wrap:

    Tested, this one works :) You can keep it simple.

    Query:

    SELECT * FROM (
        SELECT
        oh.*
        FROM
        opening_hours oh
        ORDER BY restaurant_id, 
        `day` + IF(`day` < $current_day, 7, 0)
    ) sq
    GROUP BY restaurant_id;
    

    Explanation:

    Note, though, that this is a bit hacky. Selecting a column that is neither in the GROUP BY nor wrapped in an aggregate function usually isn't allowed, because in theory it could give you a random row from each group. That's why most database systems forbid it; MySQL is actually the only one I know of that allows it (unless configured otherwise via sql_mode). That's the theory, though. In practice it's a bit different: if you do an ORDER BY in the subquery, MySQL will give you the minimum or maximum value (depending on the sort order).

    Tests:

    Desired result with current day = 1:

    root@VM:playground > SELECT * FROM (
        ->     SELECT
        ->     oh.*
        ->     FROM
        ->     opening_hours oh
        ->     ORDER BY restaurant_id,
        ->     `day` + IF(`day` < 1, 7, 0)
        -> ) sq
        -> GROUP BY restaurant_id;
    +----+---------------+------------+----------+-----+
    | id | restaurant_id | start_time | end_time | day |
    +----+---------------+------------+----------+-----+
    |  1 |             1 | 12:00:00   | 18:00:00 |   1 |
    |  3 |             2 | 09:00:00   | 16:00:00 |   4 |
    |  7 |             3 | 09:00:00   | 16:00:00 |   1 |
    +----+---------------+------------+----------+-----+
    3 rows in set (0.00 sec)
    

    Desired result with current day = 6:

    root@VM:playground > SELECT * FROM (
        ->     SELECT
        ->     oh.*
        ->     FROM
        ->     opening_hours oh
        ->     ORDER BY restaurant_id,
        ->     `day` + IF(`day` < 6, 7, 0)
        -> ) sq
        -> GROUP BY restaurant_id;
    +----+---------------+------------+----------+-----+
    | id | restaurant_id | start_time | end_time | day |
    +----+---------------+------------+----------+-----+
    |  1 |             1 | 12:00:00   | 18:00:00 |   1 |
    |  3 |             2 | 09:00:00   | 16:00:00 |   4 |
    |  8 |             3 | 09:00:00   | 16:00:00 |   6 |
    +----+---------------+------------+----------+-----+
    3 rows in set (0.00 sec)
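
    The rotation key `day + IF(day < current_day, 7, 0)` is the heart of the query: days earlier in the week than today are pushed into "next week", so the minimum key per restaurant is the next relevant day. A plain-Python sketch of that logic, with invented rows mirroring the tests above:

    ```python
    rows = [  # (restaurant_id, day) -- invented sample data
        (1, 1), (1, 4),
        (2, 4),
        (3, 1), (3, 6),
    ]

    def first_relevant_day(rows, current_day):
        best = {}
        for restaurant_id, day in rows:
            # Days before current_day are shifted into next week.
            key = day + (7 if day < current_day else 0)
            if restaurant_id not in best or key < best[restaurant_id][0]:
                best[restaurant_id] = (key, day)
        return {rid: day for rid, (key, day) in best.items()}

    print(first_relevant_day(rows, 1))  # {1: 1, 2: 4, 3: 1}
    print(first_relevant_day(rows, 6))  # {1: 1, 2: 4, 3: 6}
    ```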
    
    qid & accept id: (26718516, 26719029) query: HasMany with belongsToMany relationship soup:

    soup wrap:

    As mentioned in the comment, it's impossible to set up such a relationship with Eloquent's built-in methods. Here's how you can get the files using a bit of trickery:

    Person::with(['events.files' => function ($q) use (&$files) {
      $files = $q->get()->unique();
    }])->find($id);
    

    Then:

    $files; // collection of files related to the Person through collection of his events
    

    Mind that this code will run an additional query to get the files, so in the example above:

    1. fetch the Person
    2. fetch the Events related to the Person
    3. fetch the Files related to all the Events
    4. fetch the Files related to all the Events again
    
    qid & accept id: (26728011, 26729028) query: Oracle how to delete from a table except few partitions data soup:

    soup wrap:

    It is easy to delete data from a specific partition: this statement clears down all the data for February 2012:

    delete from t23 partition (feb2012);
    

    A quicker method is to truncate the partition:

    alter table t23 truncate partition feb2012;
    

    There are two potential snags here:

    1. Oracle won't let us truncate partitions if we have foreign keys referencing the table.
    2. The operation invalidates any partitioned Indexes so we need to rebuild them afterwards.

    Also, it's DDL, so no rollback.

    If we never again want to store data for that month we can drop the partition:

    alter table t23 drop partition feb2012;
    

    The problem arises when we want to zap multiple partitions and we don't fancy all that typing. We cannot parameterise the partition name, because it's an object name, not a variable (no quotes). That leaves only dynamic SQL.

    As you want to remove most of the data but retain the partition structure, truncating the partitions is the best option. Remember to disable any referencing integrity constraints (and to re-enable them afterwards).

    declare
        stmt varchar2(32767);
    begin
        for lrec in ( select partition_name
                      from user_tab_partitions
                      where table_name = 'T23'
                      and partition_name like '%2012'
                    )
        loop
            stmt := 'alter table t23 truncate partition '
                        || lrec.partition_name
                      ;
            dbms_output.put_line(stmt);
            execute immediate stmt;
        end loop;
    end;
    /
    

    You should definitely run the loop first with the execute immediate call commented out, so you can see which partitions your WHERE clause is selecting. Obviously you have a backup and can recover any data you didn't mean to remove, but the quickest way to undertake a restore is not to need one.

    Afterwards run this query to see which partitions you should rebuild:

    select ip.index_name, ip.partition_name, ip.status 
    from user_indexes i
         join user_ind_partitions ip
          on  ip.index_name = i.index_name
    where i.table_name = 'T23'
    and ip.status = 'UNUSABLE';
    

    You can automate the rebuild statements in a similar fashion.


    " I am thinking of copying the data of partitions I need into a temp table and truncate the original table and copy back the data from temp table to original table. "

    That's another way of doing things. With exchange partition it might be quite quick. It might also be slower. It also depends on things like foreign keys and indexes, and the ratio of zapped partitions to retained ones. If performance is important and/or you need to undertake this operation regularly, then you should benchmark the various options and see what works best for you.
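
    The catalog-driven generate-and-execute pattern above translates to any database. A minimal Python sqlite3 sketch (sqlite has no partitions, so hypothetical per-month tables stand in for them; all names are invented):

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE t23_jan2012 (x INTEGER);
        CREATE TABLE t23_feb2012 (x INTEGER);
        CREATE TABLE t23_jan2013 (x INTEGER);
        INSERT INTO t23_jan2012 VALUES (1);
        INSERT INTO t23_feb2012 VALUES (2);
        INSERT INTO t23_jan2013 VALUES (3);
    """)

    # Query the catalog for matching objects, build the statements,
    # print them (the dry run the answer recommends), then execute.
    names = [r[0] for r in con.execute(
        "SELECT name FROM sqlite_master "
        "WHERE type = 'table' AND name LIKE 't23_%2012'")]
    stmts = [f"DELETE FROM {name}" for name in names]
    for stmt in stmts:
        print(stmt)
        con.execute(stmt)

    remaining = con.execute("SELECT COUNT(*) FROM t23_jan2013").fetchone()[0]
    print(remaining)  # the 2013 "partition" is untouched
    ```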

    qid & accept id: (26729494, 26772475) query: swapping comma separated values in oracle soup:

    soup wrap:

    Using only regexp_replace,

    with string_table(slno, old_string)
    as (
            select 1, '1,2,3,4,5,6' from dual union all
            select 2, '1,2,3,4,5' from dual union all
            select 3, 'a,b,c,d,e,f' from dual union all
            select 4, 'a,b,c,d,e' from dual
    )
    select
            slno,
            old_string,
            regexp_replace(old_string,'([^,]+),([^,]+)','\2,\1')    new_string
    from 
            string_table;
    
          SLNO  OLD_STRING   NEW_STRING
    ----------  -----------  ------------------------------------------------------------
             1  1,2,3,4,5,6  2,1,4,3,6,5
             2  1,2,3,4,5    2,1,4,3,5
             3  a,b,c,d,e,f  b,a,d,c,f,e
             4  a,b,c,d,e    b,a,d,c,e
    

    Pattern:

    ([^,]+) -- any string without a comma. Enclosed in parentheses to form the first capture group.
    ,       -- a comma
    ([^,]+) -- any string without a comma. Enclosed in parentheses to form the second capture group.
    

    So, this pattern matches two strings separated by a comma.

    Replace_String:

    \2  -- the second capture group from the Pattern
    ,   -- a comma
    \1  -- the first capture group from the Pattern
    

    So, this replaces each matched pair with the same two strings, but with their positions interchanged.
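    The same swap can be reproduced with any regex engine that supports backreferences. A quick Python sketch of the pattern and replacement above (sample strings taken from the test data):

```python
import re

# Same pattern as the regexp_replace call: two comma-free runs separated
# by a comma, written back in swapped order. A trailing odd element has
# no partner, so it is left untouched.
def swap_pairs(s):
    return re.sub(r'([^,]+),([^,]+)', r'\2,\1', s)

print(swap_pairs('1,2,3,4,5,6'))  # 2,1,4,3,6,5
print(swap_pairs('a,b,c,d,e'))    # b,a,d,c,e
```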

    qid & accept id: (26804962, 26805103) query: vertica check if unique elements for each group from two columns are identical soup:

    soup wrap:

    Assuming the values in the two columns are distinct for a given gid, you can do this with a full outer join and group by:

    select coalesce(t.gid, t2.gid) as gid,
           (case when count(t.gid) = count(*) and count(t2.gid) = count(*)
                 then 1
                 else 0
            end)
    from invertica t full outer join
         invertica t2
         on t.gid = t2.gid and t.a = t2.b
    group by coalesce(t.gid, t2.gid);
    

    If the values are not distinct, you would need to clarify your question to specify whether the counts need to be the same in each column. (If you don't care about the counts, the above will work.)

    EDIT:

    You could also express this using not exists:

    select t.gid, max(val)
    from (select t.gid,
                 (case when not exists (select 1 from invertica t2 where t.gid = t2.gid and t.a = t2.b)
                       then 0
                       when not exists (select 1 from invertica t2 where t.gid = t2.gid and t.b = t2.a)
                       then 0
                       else 1
                  end) as val
          from invertica t
         ) t
    group by t.gid;
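    Outside SQL, what both queries compute is per-gid set equality between the two columns. A minimal Python sketch under the same assumption (rows are (gid, a, b) tuples; the data here is invented for illustration):

```python
from collections import defaultdict

# Invented sample: gid 1 has matching value sets in a and b, gid 2 does not.
rows = [(1, 'x', 'y'), (1, 'y', 'x'),
        (2, 'x', 'y'), (2, 'y', 'z')]

a_vals, b_vals = defaultdict(set), defaultdict(set)
for gid, a, b in rows:
    a_vals[gid].add(a)
    b_vals[gid].add(b)

# 1 when the distinct values of a and b coincide for that gid, else 0.
result = {gid: int(a_vals[gid] == b_vals[gid]) for gid in a_vals}
print(result)  # {1: 1, 2: 0}
```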
    
    qid & accept id: (26849766, 26849935) query: Need the greatest value in new column soup:

    soup wrap:

    Using a CASE statement, you can find the largest value among the columns.

    SELECT ID,
           amt_1,
           amt_2,
           amt_3,
           amt_4,
           CASE
             WHEN amt_1 >= amt_2 AND amt_1 >= amt_3 AND amt_1 >= amt_4 THEN amt_1
             WHEN amt_2 >= amt_1 AND amt_2 >= amt_3 AND amt_2 >= amt_4 THEN amt_2
             WHEN amt_3 >= amt_1 AND amt_3 >= amt_2 AND amt_3 >= amt_4 THEN amt_3
             WHEN amt_4 >= amt_1 AND amt_4 >= amt_2 AND amt_4 >= amt_3 THEN amt_4
           END NEW_COL
    FROM   Tablename 
    

    If you are using SQL Server 2008 or a later version, then try this:

    SELECT ID,amt_1,amt_2,amt_3,amt_4,
      (SELECT Max(amt) FROM (VALUES (amt_1), (amt_2), (amt_3),(amt_4)) AS value(amt))  NEW_COL
    FROM tablename
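    The VALUES trick is the row-wise equivalent of taking a max() over the four columns. The same computation in Python terms (toy rows assumed):

```python
# Each row is (ID, amt_1, amt_2, amt_3, amt_4); NEW_COL is the greatest amount.
rows = [(1, 10, 25, 5, 7),
        (2, 3, 3, 9, 1)]

# Append the per-row maximum of the four amount columns.
with_max = [row + (max(row[1:]),) for row in rows]
print(with_max)  # [(1, 10, 25, 5, 7, 25), (2, 3, 3, 9, 1, 9)]
```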
    
    qid & accept id: (26862578, 26862930) query: SQL Server Return All Sub Categories soup:

    soup wrap:

    You can do this with a recursive common-table expression (CTE).

    Something like this should do it. Here the constant 4 is your input cat_id (200 in your example).

    WITH CatCTE (cat_id) AS
    (
        SELECT t.cat_id
        FROM tblCategories t
        WHERE t.cat_id = 4
    
        UNION ALL
    
        SELECT P.cat_child_id as cat_id
        FROM CatCTE AS m
        JOIN tblCategoryHierarchy AS P on m.cat_id = P.cat_parent_id
    
    )
    SELECT cat_id
    FROM CatCTE
    WHERE
    cat_id <> 4;
    


    SCRIPT which creates some testing data:

    create table tblCategories(cat_id int, cat_name varchar(20));
    
    create table tblCategoryHierarchy(cat_parent_id int, cat_child_id int);
    
    insert into tblCategories(cat_id, cat_name) values ( 1, 'cat 1');
    insert into tblCategories(cat_id, cat_name) values ( 2, 'cat 2');
    insert into tblCategories(cat_id, cat_name) values ( 3, 'cat 3');
    insert into tblCategories(cat_id, cat_name) values ( 4, 'cat 4');
    insert into tblCategories(cat_id, cat_name) values ( 5, 'cat 5');
    
    insert into tblCategories(cat_id, cat_name) values ( 6, 'cat 6');
    insert into tblCategories(cat_id, cat_name) values ( 7, 'cat 7');
    insert into tblCategories(cat_id, cat_name) values ( 8, 'cat 8');
    insert into tblCategories(cat_id, cat_name) values ( 9, 'cat 9');
    insert into tblCategories(cat_id, cat_name) values (10, 'cat 10');
    
    insert into tblCategories(cat_id, cat_name) values (11, 'cat 11');
    insert into tblCategories(cat_id, cat_name) values (12, 'cat 12');
    
    insert into tblCategoryHierarchy (cat_parent_id, cat_child_id) values ( 1, 2);
    insert into tblCategoryHierarchy (cat_parent_id, cat_child_id) values ( 1, 3);
    
    insert into tblCategoryHierarchy (cat_parent_id, cat_child_id) values ( 4, 6);
    insert into tblCategoryHierarchy (cat_parent_id, cat_child_id) values ( 4, 8);
    
    insert into tblCategoryHierarchy (cat_parent_id, cat_child_id) values ( 8, 10);
    insert into tblCategoryHierarchy (cat_parent_id, cat_child_id) values ( 8, 11);
    
    insert into tblCategoryHierarchy (cat_parent_id, cat_child_id) values (11, 12);
    
    insert into tblCategoryHierarchy (cat_parent_id, cat_child_id) values ( 5, 7);
    insert into tblCategoryHierarchy (cat_parent_id, cat_child_id) values ( 5, 9);
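    SQLite supports recursive CTEs too, so the shape of the query can be exercised against the test hierarchy above without a SQL Server instance; a sketch using Python's built-in sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("create table tblCategoryHierarchy(cat_parent_id int, cat_child_id int)")
con.executemany(
    "insert into tblCategoryHierarchy values (?, ?)",
    [(1, 2), (1, 3), (4, 6), (4, 8), (8, 10), (8, 11), (11, 12), (5, 7), (5, 9)],
)

# Same shape as CatCTE above: seed with cat_id 4, then repeatedly join the
# hierarchy table to pull in children of rows already found.
subcats = [r[0] for r in con.execute("""
    WITH RECURSIVE CatCTE(cat_id) AS (
        SELECT 4
        UNION ALL
        SELECT h.cat_child_id
        FROM CatCTE m
        JOIN tblCategoryHierarchy h ON m.cat_id = h.cat_parent_id
    )
    SELECT cat_id FROM CatCTE WHERE cat_id <> 4
""")]
print(sorted(subcats))  # [6, 8, 10, 11, 12]
```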
    
    qid & accept id: (26906460, 26906869) query: Extracting Data from two tables in SQL soup:

    soup wrap:

    Try this,

    SELECT Sum(B.English) Total
    FROM   #Table_1 A
    JOIN   #Table_2 B ON A.Name = B.Name
    WHERE  Grade = 'AA' 
    

    If you want the marks separately for each name, use this:

    SELECT A.Name,
           Sum(B.English) Total
    FROM   #Table_1 A
    JOIN   #Table_2 B ON A.Name = B.Name
    WHERE  Grade = 'AA'
    GROUP  BY A.Name 
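    The join is easy to try out with Python's sqlite3 module (table names changed to t1/t2 here, since #-prefixed temp tables are SQL Server specific, and the sample data is invented):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("create table t1(Name text, Grade text)")
con.execute("create table t2(Name text, English int)")
con.executemany("insert into t1 values (?, ?)",
                [('Amy', 'AA'), ('Bob', 'AA'), ('Cid', 'BB')])
con.executemany("insert into t2 values (?, ?)",
                [('Amy', 80), ('Bob', 70), ('Cid', 90)])

# Sum English marks only for the students whose grade is 'AA'.
total = con.execute("""
    SELECT Sum(B.English) Total
    FROM t1 A JOIN t2 B ON A.Name = B.Name
    WHERE Grade = 'AA'
""").fetchone()[0]
print(total)  # 150
```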
    
    qid & accept id: (26929769, 26932834) query: EDT and EST timestamp sqlldr data load in oracle soup:

    soup wrap:

    If you can't change the format of the data in the file, and can't manipulate the file before loading it, you could replace a specific EDT value with the region value US/Eastern (or any suitable value like America/New_York) with an SQL operator:

    "DOC_DATE_ADDED" TIMESTAMP WITH TIME ZONE "DY MON DD HH24:MI:SS TZR YYYY"
      "REPLACE(:DOC_DATE_ADDED, 'EDT', 'US/Eastern')"  
    

    (split into two lines for readability, but you can do that in the control file too).

    When your sample data file is loaded the table contains:

    select to_char(doc_date_added, 'YYYY-MM-DD HH24:MI:SS TZD') as TZD,
       to_char(doc_date_added, 'YYYY-MM-DD HH24:MI:SS TZR') as TZR
    from my_table;
    
    TZD                     TZR                          
    ----------------------- ------------------------------
    2013-03-07 14:27:14 EST 2013-03-07 14:27:14 EST        
    2013-03-07 14:27:27 EST 2013-03-07 14:27:27 EST        
    2013-04-09 18:20:54 EDT 2013-04-09 18:20:54 US/EASTERN 
    2013-04-09 18:24:26 EDT 2013-04-09 18:24:26 US/EASTERN 
    

    ... so you preserve the EST/EDT split; though the TZR shows US/EASTERN and EST - so it might be better to change the EST value as well, with a nested REPLACE or with:

    "DOC_DATE_ADDED" TIMESTAMP WITH TIME ZONE "DY MON DD HH24:MI:SS TZR YYYY"
      "REGEXP_REPLACE(:DOC_DATE_ADDED, 'E[SD]T', 'US/Eastern')"
    

    Or if all your values are always EST/EDT, you could do the timestamp conversion explicitly and just cut out the actual string you're given:

    "DOC_DATE_ADDED" CHAR "FROM_TZ(TO_TIMESTAMP(SUBSTR(:DOC_DATE_ADDED, 1, 19)
      || SUBSTR(:DOC_DATE_ADDED, 25, 29), 'DY MON DD HH24:MI:SS YYYY'), 'US/Eastern')"
    

    Which loads your data as:

    TZD                     TZR                          
    ----------------------- ------------------------------
    2013-03-07 14:27:14 EST 2013-03-07 14:27:14 US/EASTERN 
    2013-03-07 14:27:27 EST 2013-03-07 14:27:27 US/EASTERN 
    2013-04-09 18:20:54 EDT 2013-04-09 18:20:54 US/EASTERN 
    2013-04-09 18:24:26 EDT 2013-04-09 18:24:26 US/EASTERN 
    

    The danger with that is that if you ever do get a value in a different time zone it'll silently be recorded against the wrong region, whereas the first version will either process it successfully or reject it, depending on whether it's recognised (i.e. in Wernfried's first list).
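    The string surgery that REGEXP_REPLACE performs is easy to verify in isolation. A Python sketch of the same substitution (the input line is an assumed example in the file's format):

```python
import re

line = 'Tue Apr 09 18:20:54 EDT 2013'

# Same idea as REGEXP_REPLACE(:DOC_DATE_ADDED, 'E[SD]T', 'US/Eastern'):
# both the standard- and daylight-time abbreviations map to one region.
fixed = re.sub(r'E[SD]T', 'US/Eastern', line)
print(fixed)  # Tue Apr 09 18:20:54 US/Eastern 2013
```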

    qid & accept id: (26942697, 26944392) query: how to add hours ,minutes or seconds in 'TIME' Datatype in mysql soup:
    soup wrap:
    select arrival_time,
           maketime(mod(HOUR(date_add(arrival_time, INTERVAL 1 HOUR)), 24),
                    mod(minute(date_add(arrival_time, INTERVAL 2 MINUTE)), 60),
                    mod(second(date_add(arrival_time, INTERVAL 2 SECOND)), 60)) sooner_or_later,
           TIME((ADDTIME(TIME('23:59:59'), TIME('01:02:02')))%(TIME('24:00:00'))) or_rather_so
    from table1;
    

    returns

    |                   ARRIVAL_TIME |                SOONER_OR_LATER |                   OR_RATHER_SO |
    |--------------------------------|--------------------------------|--------------------------------|
    | January, 01 1970 23:59:59+0000 | January, 01 1970 00:01:01+0000 | January, 01 1970 01:02:01+0000 |
    

    The second column pushes each time field along individually; the last column does proper modular arithmetic - borrowed from ADDTIME() return 24 hour time.
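    The modulo-24-hour arithmetic of the last column can be checked with Python's timedelta, which supports the % operator directly:

```python
from datetime import timedelta

arrival = timedelta(hours=23, minutes=59, seconds=59)
offset  = timedelta(hours=1, minutes=2, seconds=2)

# Equivalent of TIME(ADDTIME(...) % TIME('24:00:00')): wrap past midnight.
wrapped = (arrival + offset) % timedelta(hours=24)
print(wrapped)  # 1:02:01
```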

    SQL Fiddle

    qid & accept id: (26945528, 26945552) query: insert into derived column with in same table soup:

    soup wrap:

    Since you have a computed column you have to specify the columns you insert into:

    insert into db (col, col2) values (10, 20);
    

    a select * from db after the insert above would give you:

    | COL | COL2 | CAL3 |
    |-----|------|------|
    |  10 |   20 |   30 |
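    A computed column is just an expression the database evaluates for you. A minimal sketch with Python's sqlite3 (the expression is computed in the SELECT here, since stored generated columns require SQLite 3.31+):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("create table db(col int, col2 int)")

# Only the stored columns are named in the INSERT, as in the answer above.
con.execute("insert into db (col, col2) values (10, 20)")

# cal3 is derived from the other two columns on read.
row = con.execute("select col, col2, col + col2 as cal3 from db").fetchone()
print(row)  # (10, 20, 30)
```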
    
    qid & accept id: (26965377, 26965438) query: sql server get value between spaces in one column soup:

    soup wrap:

    One trick for this type of problem is to use parsename(). I think the following does what you want, assuming there are no periods in the names:

    select parsename(replace(val, ' ', '.'), 2)
    

    Here is an example.

    EDIT:

    Sgeddes is correct. If you consistently want the second name and can have three or four parts, then reverse() can be used:

    select reverse(parsename(replace(reverse(val), ' ', '.'), 2))
    

    (It seems that one of the values does have four parts; I originally read it as "Delete From TableName".)
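    Since parsename() numbers parts from the right, the reverse() sandwich is what pins the result to the second token from the left. In Python the whole trick collapses to a split (sample values invented):

```python
def second_token(val):
    # reverse(parsename(replace(reverse(val), ' ', '.'), 2)) boils down to:
    # the second space-separated token, counted from the left.
    return val.split(' ')[1]

print(second_token('John Ringo Smith'))   # Ringo
print(second_token('Ann Lee Moe Parks'))  # Lee
```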

    qid & accept id: (27004031, 27004544) query: Selected item from Datagridview to Show in ComboBox soup:

    soup wrap:

    You could try doing something like this.

            ComboBox1.Text = DataGridView1.SelectedRows.Item(0).Cells(0).FormattedValue + " " + DataGridView1.SelectedRows.Item(0).Cells(1).FormattedValue
    

    or

            ComboBox1.Text = DataGridView1.SelectedRows.Item(0).Cells(0).FormattedValue + " " + _
                             DataGridView1.SelectedRows.Item(0).Cells(1).FormattedValue
    

    However, if your drop-down list box has an ID as its value and that ID is also in the grid, you can set it like this:

    ComboBox1.Value = DataGridView1.Rows[DataGridView1.SelectedIndex].Cells["HiddenIdRow"].Text.ToString()
    
    qid & accept id: (27055911, 27056117) query: Using datediff in oracle soup:

    soup wrap:

    I would create a function to return the difference, since there is no datediff in Oracle. Something like this:

    CREATE OR REPLACE FUNCTION datediff (options   IN VARCHAR2,
                                         p_d1      IN DATE,
                                         p_d2      IN DATE)
       RETURN NUMBER
    AS
       l_result   NUMBER;
    BEGIN
       SELECT   (p_d2 - p_d1)
              * DECODE (UPPER (options),
                        'SS', 24 * 60 * 60,
                        'MI', 24 * 60,
                        'HH', 24,
                        NULL)
         INTO l_result
         FROM DUAL;
    
       RETURN l_result;
    END;
    

    The OPTIONS parameter tells the function which unit to return: 'ss' = seconds, 'mi' = minutes, 'hh' = hours.

    Then I would change your original query to:

    Select * from myTable where datediff ('ss', Receive_Date, Update_Date) >= 60
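    The unit scaling can be sanity-checked outside the database. A Python sketch of the same arithmetic (Oracle date subtraction yields a difference in days, hence the multipliers):

```python
from datetime import datetime

def datediff(options, d1, d2):
    # p_d2 - p_d1 in Oracle is a number of days; scale it to the requested unit.
    delta_days = (d2 - d1).total_seconds() / 86400.0
    scale = {'SS': 24 * 60 * 60, 'MI': 24 * 60, 'HH': 24}
    return delta_days * scale[options.upper()]

d1 = datetime(2014, 11, 21, 10, 0, 0)
d2 = datetime(2014, 11, 21, 10, 1, 30)
print(datediff('ss', d1, d2))  # ~90.0 seconds
print(datediff('mi', d1, d2))  # ~1.5 minutes
```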
    
    qid & accept id: (27107093, 27107767) query: Select average from join MYSQL soup:

    soup wrap:

    Consider the following. How does this result differ from the desired result?

    +-----+------------+-------------+----------+------------+--------------+-----+-------+-----+--------+
    | uid | name       | description | url      | picurl     | mapurl       | pid | price | rid | rating |
    +-----+------------+-------------+----------+------------+--------------+-----+-------+-----+--------+
    |   5 | Havana Pub |             |          |            |              |  35 |    74 |  11 |      5 |
    |   3 | Hos Naboen |             |          |            |              |  33 |    74 |   9 |      5 |
    |   2 | Javel      | Musikk      | javel.no | pic.jave.. | map.javel.no |  38 |    88 |   8 |      5 |
    |   1 | Kick       | Yay         | kick.no  | http://p.. | map.kick.no  |  31 |    74 |  15 |      1 |
    |   6 | Leopold    |             |          |            |              |  36 |    74 |  12 |      5 |
    |   4 | Victoria   |             |          |            |              |  37 |    75 |  10 |      5 |
    +-----+------------+-------------+----------+------------+--------------+-----+-------+-----+--------+
    

    OK. I'm going to take a wild stab in the dark here...

     SELECT p.uid
          , u.name
          , u.description
          , u.url
          , u.picurl
          , u.mapurl
          , p.pid
          , p.price
          , AVG(r.rating) rating
       FROM utested u
       JOIN price p
         ON p.uid = u.uid
       JOIN ( SELECT uid, MAX(pid) latest_price FROM price GROUP BY uid ) px
         ON px.uid = p.uid
        AND px.latest_price = p.pid
       JOIN rating r
         ON r.uid = u.uid
      GROUP
         BY u.name;
     +-----+------------+-------------+----------+--------------+--------------+-----+-------+--------+
     | uid | name       | description | url      | picurl       | mapurl       | pid | price | rating |
     +-----+------------+-------------+----------+--------------+--------------+-----+-------+--------+
     |   5 | Havana Pub |             |          |              |              |  35 |    74 | 5.5000 |
     |   3 | Hos Naboen |             |          |              |              |  33 |    74 | 4.0000 |
     |   2 | Javel      | Musikk      | javel.no | pic.javel... | map.javel.no |  38 |    88 | 5.0000 |
     |   1 | Kick       | Yay         | kick.no  | http://pri.. | map.kick.no  |  31 |    74 | 3.4000 |
     |   6 | Leopold    |             |          |              |              |  36 |    74 | 3.5000 |
     |   4 | Victoria   |             |          |              |              |  37 |    75 | 4.0000 |
     +-----+------------+-------------+----------+--------------+--------------+-----+-------+--------+
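    The two key moves - the derived table pinning each uid to its latest pid, and AVG() over the rating join - can be tested in miniature with Python's sqlite3 (toy data invented):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("create table price(uid int, pid int, price int)")
con.execute("create table rating(uid int, rating int)")
con.executemany("insert into price values (?, ?, ?)",
                [(1, 31, 74), (1, 30, 60),   # pid 31 is uid 1's latest price row
                 (2, 38, 88)])
con.executemany("insert into rating values (?, ?)",
                [(1, 1), (1, 5), (2, 5)])

# Derived table picks the max pid per uid, so only the latest price row joins;
# AVG collapses the rating rows per group.
rows = sorted(con.execute("""
    SELECT p.uid, p.price, AVG(r.rating) rating
    FROM price p
    JOIN (SELECT uid, MAX(pid) latest_price FROM price GROUP BY uid) px
      ON px.uid = p.uid AND px.latest_price = p.pid
    JOIN rating r ON r.uid = p.uid
    GROUP BY p.uid
""").fetchall())
print(rows)  # [(1, 74, 3.0), (2, 88, 5.0)]
```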
    
    qid & accept id: (27112737, 27113062) query: How do I find one matching strings in two txt files soup:

    soup wrap:

    Here's how I'd approach it (though using PowerShell rather than SQL):

    clear
    pushd c:\myPath\myFolder\
    
    #read in the contents of the files
    $file1 = get-content("file1.txt") 
    $file2 = get-content("file2.txt")
    
    #loop through each row of the whitespace separated file
    $file1 = $file1 | %{
        #for each line, split on whitespace characters, returning the results back in a single column
        $_ -split "\s" | %{$_}
    }
    #compare the two files for matching data & output this info
    compare-object $file1 $file2 -IncludeEqual -ExcludeDifferent | ft -AutoSize 
    
    popd
    

    NB: to ignore the protocol, simply remove it from the string using a technique similar to our split on spaces, i.e. a regex - this time using replace instead of split.

    clear
    pushd c:\temp
    
    $file1 = get-content("file1.txt") 
    $file2 = get-content("file2.txt")
    
    $file1 = $file1 | %{
        $_ -split "\s" | %{
            $_ -replace ".*://(.*)",'$1'
        }
    }
    
    $file2 = $file2 | %{
        $_ -replace ".*://(.*)",'$1'
    }
    
    compare-object $file1 $file2 -IncludeEqual -ExcludeDifferent | ft -AutoSize 
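    The same comparison - flatten file 1 on whitespace, strip the protocol, intersect - is only a few lines of Python as well (file contents inlined as invented sample strings):

```python
import re

file1 = "http://example.com/a https://example.org/b\nhttp://other.net/c"
file2 = "example.com/a\nexample.net/x"

# Drop everything up to and including '://'; lines without a protocol pass through.
strip = lambda u: re.sub(r'.*://(.*)', r'\1', u)

urls1 = {strip(u) for u in file1.split()}       # split on any whitespace
urls2 = {strip(u) for u in file2.splitlines()}  # one URL per line

print(urls1 & urls2)  # {'example.com/a'}
```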
    

    However, should you prefer a SQL solution, try this (MS SQL Server):

    create table f1(url nvarchar(1024))
    create table f2(url nvarchar(1024))
    
    BULK INSERT f1
    FROM 'C:\myPath\myFolder\file1.txt' 
    WITH ( ROWTERMINATOR =' ', FIRSTROW = 1 )
    
    BULK INSERT f2
    FROM 'C:\myPath\myFolder\file2.txt' 
    WITH ( FIRSTROW = 1 )
    go
    
    delete from f1 where coalesce(rtrim(url),'') = ''
    delete from f2 where coalesce(rtrim(url),'') = ''
    
    select x.url, x.x, y.y
    from
    (
        select SUBSTRING(url,patindex('%://%',url)+3, len(url)) x
        , url 
        from f1 
    ) x
    inner join 
    (
        select SUBSTRING(url,patindex('%://%',url)+3, len(url)) y
        , url 
        from f2 
    ) y 
    on y.y = x.x
    
    qid & accept id: (27122437, 27122570) query: Oracle Row fetch within limit soup:

    soup wrap:

    Try this:

    SELECT xml_to_string(XMLRECORD) FROM (select t.*, rownum rw from TABLENAME t) 
    WHERE  rw>10000 AND rw<=20000
    

    ROWNUM is calculated as Oracle retrieves the rows of the result set. That's why a query like select * from some_table where rownum > 1 never returns anything.

    In addition, without ORDER BY it doesn't make sense to get rows between 10000 and 20000. You might as well get the first 10000 (as rows are unsorted the result is unpredictable - any row can be the first).

    From Oracle documentation:

    For each row returned by a query, the ROWNUM pseudocolumn returns a number indicating the order in which Oracle selects the row from a table or set of joined rows. The first row selected has a ROWNUM of 1, the second has 2, and so on.

    Conditions testing for ROWNUM values greater than a positive integer are always false. For example, this query returns no rows:

    SELECT *
    FROM employees
    WHERE ROWNUM > 1;
    

    The first row fetched is assigned a ROWNUM of 1 and makes the condition false. The second row to be fetched is now the first row and is also assigned a ROWNUM of 1 and makes the condition false. All rows subsequently fail to satisfy the condition, so no rows are returned.
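
    Putting those two points together: to page deterministically, sort in the innermost query so ROWNUM is assigned after the ordering (a sketch; some_key stands in for whatever ordering column your table actually has):

    ```sql
    SELECT xml_to_string(XMLRECORD)
    FROM (
        SELECT t.*, ROWNUM rw
        FROM (SELECT * FROM TABLENAME ORDER BY some_key) t
        WHERE ROWNUM <= 20000
    )
    WHERE rw > 10000;
    ```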

    qid & accept id: (27162998, 27163041) query: how to display only 20 items of the result in oracle sql? soup:

    soup wrap:

    The "highest" 20 entries suggests a sort. You would do something like this:

    select t.*
    from (select t.*
          from table t
          order by highestcol desc
         ) t
    where rownum <= 20;
    

    If you are using Oracle 12c or more recent, you can use the fetch first clause instead:

    select t.*
    from table t
    order by highestcol desc
    fetch first 20 rows only;
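
    For completeness, the same 12c row-limiting syntax can also skip rows, should you want a window other than the first 20 (a sketch using the same hypothetical highestcol):

    ```sql
    select t.*
    from table t
    order by highestcol desc
    offset 20 rows fetch next 20 rows only;
    ```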
    
    qid & accept id: (27166043, 27166699) query: Find the missing number group by category soup:

    soup wrap:

    You can do this by joining with a number table. This query uses the spt_values table and should work:

    ;with cte as (
        select category , min(batchno) min_batch, max(batchno) max_batch
        from #tmp
        group by category
    )
    select number, category
    from master..spt_values
    cross join cte
    where type = 'p'
      and number > min_batch
      and number < max_batch
    group by category, number
    

    Sample SQL Fiddle

    Note that this table only has a sequence of numbers 0-2047, so if your BatchNo can be higher you need another source for the query (could be another table or a recursive cte); something like this would work:

    ;with 
        cte (category, min_batch, max_batch) as (
           select category , min(batchno), max(batchno)
           from #tmp
           group by category
        ), 
        numbers (number, max_number) as (
           select 1 as number, (select MAX(batchno) from #tmp) max_number
           union all
           select number + 1, max_number
           from numbers
           where number < max_number
        )
    
    select number, category
    from numbers cross join cte
    where number > min_batch
      and number < max_batch
    group by category, number
    option (maxrecursion 0)
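
    If the recursion depth worries you, another common way to build a large number table is to cross join spt_values with itself and number the rows with ROW_NUMBER (a sketch, not tuned for your data):

    ```sql
    ;with numbers as (
        select row_number() over (order by (select null)) as number
        from master..spt_values a
        cross join master..spt_values b  -- roughly 2500 x 2500 rows available
    )
    select number
    from numbers
    where number <= 1000000
    ```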
    
    qid & accept id: (27218389, 27226135) query: Location in mongoose, mongoDB soup:

    soup wrap:

    I fixed it myself.

    I did this in my model:

    loc :  { type: {type:String}, coordinates: [Number]},
    

    Underneath I made it a 2dsphere index.

    eventSchema.index({loc: '2dsphere'});
    

    And to add data to it:

    loc: { type: "Point", coordinates: [ longitude, latitude ] },
    
    qid & accept id: (27251701, 27252444) query: TSQL - row values to column headings including further column values soup:

    soup wrap:

    It seems that you will need to unpivot the columns Column3, Column4, and Column5 first to return the rows that are not null, then pivot the Column2 items into columns. If you don't know the column names you'll need to use dynamic SQL inside of a stored procedure.

    Before using dynamic sql, I'd first write a static version of the query to get the correct logic. Since you are using SQL Server 2012, you can use CROSS APPLY and VALUES to unpivot the data:

    select 
      m.docId,
      m.Date,
      m.column1,
      m.column2,
      c.value
    from dbo.mappingTest m
    cross apply
    (
      values
        ('Column3', Column3),
        ('Column4', Column4),
        ('Column5', convert(varchar(10), Column5, 120))
    ) c (Col, Value)
    where c.value is not null
    

    See Demo. Your data now looks like this:

    |      DOCID |       DATE |  COLUMN1 |      COLUMN2 |   VALUE |
    |------------|------------|----------|--------------|---------|
    | ABC000123  | 2014-04-11 | approval | project name |     ABC |
    | ABC000123  | 2014-04-11 | approval | article name |  Art 01 |
    | ABC000123  | 2014-04-11 | approval |     customer |    ACME |
    | ABC000123  | 2014-04-11 | approval |   department | Dept. A |
    | ABC000123  | 2014-04-11 | approval |        plant |  Europe |
    | ABC000123  | 2014-04-11 | approval |    sop month |      10 |
    

    You have multiple rows for each DocID with the values you'll eventually want under the Column2 items in a single column. Now you can apply the PIVOT function:

    select 
      DocID,
      Date,
      Column1,
      [project name], [article name], [customer],
      [department], [plant], [sop month],
      [sop year], [eop month], [eop year], [budget], [savings]
    from 
    (
      select 
        m.docId,
        m.Date,
        m.column1,
        m.column2,
        c.value
      from dbo.mappingTest m
      cross apply
      (
        values
          ('Column3', Column3),
          ('Column4', Column4),
          ('Column5', convert(varchar(10), Column5, 120))
      ) c (Col, Value)
      where c.value is not null
    ) d
    pivot
    (
      max(value)
      for column2 in ([project name], [article name], [customer],
                      [department], [plant], [sop month],
                      [sop year], [eop month], [eop year], [budget], [savings])
    ) p
    

    See SQL Fiddle with Demo. I've included all column names, but inside of the PIVOT IN you'd include only the values you actually want as the new columns.

    Now if you want to use dynamic SQL, you'll adjust the code to be:

    DECLARE @cols AS NVARCHAR(MAX),
        @query  AS NVARCHAR(MAX)
    
    select @cols = STUFF((SELECT ',' + QUOTENAME(COLUMN2) 
                        from dbo.mappingTest
                        group by COLUMN2
                        order by COLUMN2
                FOR XML PATH(''), TYPE
                ).value('.', 'NVARCHAR(MAX)') 
            ,1,1,'')
    
    set @query 
      = 'SELECT 
            DocID,
            Date,
            Column1,' + @cols + ' 
          from 
          (
            select 
              m.docId,
              m.Date,
              m.column1,
              m.column2,
              c.value
            from dbo.mappingTest m
            cross apply
            (
              values
                (''Column3'', Column3),
                (''Column4'', Column4),
                (''Column5'', convert(varchar(10), Column5, 120))
            ) c (Col, Value)
            where c.value is not null
          ) d
          pivot 
          (
            min(value)
            for column2 in (' + @cols + ')
          ) p '
    
    exec sp_executesql @query
    

    See SQL Fiddle with Demo. Both versions will give a result of:

    |      DOCID |       DATE |  COLUMN1 | ARTICLE NAME | BUDGET | CUSTOMER | DEPARTMENT | EOP MONTH | EOP YEAR |  PLANT | PROJECT NAME | SAVINGS | SOP MONTH | SOP YEAR |
    |------------|------------|----------|--------------|--------|----------|------------|-----------|----------|--------|--------------|---------|-----------|----------|
    | ABC000123  | 2014-04-11 | approval |       Art 01 |  17890 |     ACME |    Dept. A |         0 |    21019 | Europe |          ABC |  (null) |        10 |     2014 |
    | ABC000123  | 2014-04-11 |  project |       (null) | (null) |   (null) |     (null) |    (null) |   (null) | (null) |       (null) |  -0,020 |    (null) |   (null) |
    | DEF000123  | 2014-05-11 | approval |       Art 02 | (null) |   (null) |     (null) |    (null) |   (null) | (null) |          DEF |  (null) |    (null) |   (null) |
    
    qid & accept id: (27304683, 27304952) query: Sample Oracle SQL Randomly - in absense of ROWID soup:

    soup wrap:

    The error message you get when you try to run the first query is a pretty big clue:

    ORA-01446: cannot select ROWID from, or sample, a view with DISTINCT, GROUP BY, etc.
    

    It's pretty clear to me from this that the SAMPLE functionality requires access to ROWID to work. As ROWID is a pseudocolumn that the database uses to physically locate a row, any query where the ROWID is indeterminate (such as when the data is aggregated) cannot use SAMPLE on the outer query. In the case of ALL_ALL_TABLES, the fact that it is a view that combines two tables via UNION blocks access to the ROWID.


    From your revised question, the first thing that jumps out at me is that the SAMPLE clause must be in the FROM clause, between the table name and any alias. I was able to sample in a query with joins like this:

    SELECT *
    FROM   table_a SAMPLE (10) a
           JOIN table_b SAMPLE (10) b 
           ON a.column1 = b.column1
    

    Regarding your actual query, I tried using the tables (again, actually views) that you're trying to sample one at a time:

    select * from all_constraints sample(10)
    
    ORA-01445: cannot select ROWID from, or sample, a join view without a key-preserved table
    
    select * from all_cons_columns sample(10)
    
    ORA-01445: cannot select ROWID from, or sample, a join view without a key-preserved table
    

    This message is pretty clear: none of the tables in these views are key-preserved (i.e. guaranteed to return each row no more than once), so you can't sample them.


    The following query should work to manually create a random sample, using DBMS_RANDOM.

    SELECT   *
    FROM     (SELECT cols.table_name,
                     cols.column_name,
                     cols.position,
                     cons.status,
                     cons.owner,
                     cons.constraint_type,
                     DBMS_RANDOM.VALUE rnd
              FROM   all_constraints cons
                     JOIN all_cons_columns cols
                        ON     cons.constraint_name = cols.constraint_name
                           AND cons.owner = cols.owner)
    WHERE    rnd < .1
    ORDER BY table_name, position
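
    Note that the rnd < .1 filter yields an approximate 10% sample. If you need an exact row count instead, a common variant (a sketch) orders by the random value and caps with ROWNUM:

    ```sql
    SELECT *
    FROM   (SELECT cons.*
            FROM   all_constraints cons
            ORDER BY DBMS_RANDOM.VALUE)
    WHERE  ROWNUM <= 100;
    ```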
    
    qid & accept id: (27309001, 27309032) query: SQL statement equivalent to ternary operator soup:

    soup wrap:

    Try this:

    SELECT id, IF(integer_val = 10, 100, 0) AS new_val 
    FROM my_table;
    

    Or, using the standard CASE expression (IF() is MySQL-specific):

    SELECT id, (CASE WHEN integer_val = 10 THEN 100 ELSE 0 END) AS new_val 
    FROM my_table;
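
    For what it's worth, if you happen to be on SQL Server 2012 or later rather than MySQL, IIF gives you the same ternary shape (a sketch):

    ```sql
    SELECT id, IIF(integer_val = 10, 100, 0) AS new_val
    FROM my_table;
    ```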
    
    qid & accept id: (27315388, 27352804) query: how to match strings from different tables considering the spaces,hyphons,dot etc in oracle soup:

    soup wrap:

    Contrary to the comments stating otherwise, this is fairly straightforward. Since you're using Oracle, you can use a combination of TRIM and REPLACE to strip the dots from the abbrevi column. There are two cases your join needs to consider - in some of your data, the initials come before the name, and in some, they come after the name. I suggest using IN to cover both cases.

    Your query could be written like this, if it didn't have to match ARUNKUMAR with ARUN KUMAR.

    SELECT c.name, b.name 
    FROM   S1DM c JOIN G1DM b 
    ON  c.name IN ( 
        TRIM( REPLACE(b.abbrevi, '.', ' ') || ' ' || b.name), 
        TRIM( b.name || ' ' || REPLACE(b.abbrevi, '.', ' ')))
    AND c.dob = b.dob
    

    To deal with the case where the spaces don't match between the two names, you could eliminate the spaces entirely, from both names. That would then look something like this.

    SELECT c.name, b.name 
    FROM   S1DM c JOIN G1DM b 
    ON  REPLACE(c.name, ' ', '') IN ( 
        REPLACE(b.abbrevi, '.', '') || REPLACE(b.name, ' ', ''), 
        REPLACE(b.name, ' ', '') || REPLACE(b.abbrevi, '.', ''))
    AND c.dob = b.dob
    
    qid & accept id: (27326723, 27326787) query: Remove duplicate column after SQL query soup:

    soup wrap:

    The most elegant way would be to use the USING clause in an explicit join condition:

    SELECT houseid, v.vehid, v.epatmpg, d.houseid, d.trpmiles
    FROM   vehv2pub v
    JOIN   dayv2pub d USING (houseid)
    WHERE  v.vehid >= 1
    AND    d.trpmiles < 15;

    This way, the column houseid is in the result only once, even if you use SELECT *.

    Per documentation:

    USING is a shorthand notation: it takes a comma-separated list of column names, which the joined tables must have in common, and forms a join condition specifying equality of each of these pairs of columns. Furthermore, the output of JOIN USING has one column for each of the equated pairs of input columns, followed by the remaining columns from each table.

    To get the average epatmpg for the selected rows:

    SELECT avg(v.epatmpg) AS avg_epatmpg
    FROM   vehv2pub v
    JOIN   dayv2pub d USING (houseid)
    WHERE  v.vehid >= 1
    AND    d.trpmiles < 15;
    

    If there are multiple matches in dayv2pub, the derived table can hold multiple instances of each row in vehv2pub after the join. avg() is based on the derived table.

    qid & accept id: (27379264, 27379518) query: Looking for infinite relation in SQL soup:

    soup wrap:

    Does a simple inner join to the same table not do what you need?

    SELECT table1.Parent as Row, table1.Child as OtherRow
    FROM table table1
        inner join table table2
        ON table1.Parent = table2.Child
        AND table1.Child = table2.Parent
    

    This would give you the "Parent->Child" row for each match, like so:

    Row | OtherRow
    ----|----------
      1 |  2
    

    But you could then use the result as a sub-query to pull out all the rows, e.g.:

    SELECT table.Parent as Parent, table.Child as Child
    FROM table
        INNER JOIN (that query) query
        ON (table.Parent = query.Row AND table.Child = query.OtherRow)
            OR (table.Parent = query.OtherRow AND table.Child = query.Row)
    

    This would give you

    Parent | Child
    -------|------
       1   |   2
       2   |   1
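
    If you only want each matched pair reported once rather than in both directions, a common trick is to keep just the row where Parent < Child (a sketch in the same placeholder style as above):

    ```sql
    SELECT t1.Parent as Parent, t1.Child as Child
    FROM table t1
        inner join table t2
        ON t1.Parent = t2.Child
        AND t1.Child = t2.Parent
    WHERE t1.Parent < t1.Child
    ```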
    
    qid & accept id: (27381225, 27384938) query: Changing the values in a column with a value from the same column soup:

    soup wrap:

    This will do what you want:

        UPDATE Table1, (SELECT TOP 1 Field1 As F FROM Table1 WHERE Field1 Is Not Null)
        SET Field1 = F
        WHERE Field1 Is Null
    

    It handles several special cases safely:

    • If more than one row has a value in Field1, the value from the first such row is used.
    • The first row of the table need not have a value in Field1.
    • If none of the rows have a value in Field1, nothing bad will happen.

    Before:

        Field1  Field2  
                apple 
        pet     cat    
                dog    
        color   red    
                blue     
    

    After:

        Field1  Field2  
        pet     apple 
        pet     cat    
        pet     dog    
        color   red    
        pet     blue     
    
    qid & accept id: (27399755, 27400795) query: hierarchy records soup:

    soup wrap:

    Note: in the following, t_hierarchy stands for your source table (since you did not name one in your question).

    Although it may be possible to do this via hierarchical SQL, I believe pure SQL would be a PITA to implement and that a regular recursive PL/SQL function will do just fine.

    First, create yourself a simple schema-level collection type:

    create or replace type arr_integers as table of integer;
    

    then a function

    create or replace
    function f_parent_child_xml
        ( i_parent_id                   in t_hierarchy.parent_id%type
        , i_visited_nodes               in arr_integers default null )
        return xmltype
    is
        l_result                        xmltype;
    
        l_contained_by_xml              xmltype;
        l_contained_by#                 integer;
        l_contains_xml                  xmltype;
        l_contains#                     integer;
    
        l_new_visited_nodes             arr_integers;
    begin
        if i_visited_nodes is null then
            select
                xmlelement("view_hierarchy",
                    xmlattributes('com.hierarchy' as "chm"),
                    xmlelement("link",
                        f_parent_child_xml(i_parent_id, arr_integers())
                ))
            into l_result
            from dual;
        else
            select parent_id
            bulk collect into l_new_visited_nodes
            from t_hierarchy
            where i_parent_id in (child_id, parent_id)
            union
            select child_id
            from t_hierarchy
            where i_parent_id in (child_id, parent_id)
            union
            select column_value
            from table(i_visited_nodes);
    
            select
                xmlagg(
                    f_parent_child_xml(H1.parent_id, l_new_visited_nodes)
                ) as xml$,
                count(1) as rows#
            into l_contained_by_xml, l_contained_by#
            from t_hierarchy H1
            where H1.child_id = i_parent_id
                and not exists (
                    select 1
                    from table(i_visited_nodes) X
                    where X.column_value = H1.parent_id
                )
            ;
    
            select
                xmlagg(
                    f_parent_child_xml(H2.child_id, l_new_visited_nodes)
                ) as xml$,
                count(1) as rows#
            into l_contains_xml, l_contains#
            from t_hierarchy H2
            where H2.parent_id = i_parent_id
                and not exists (
                    select 1
                    from table(i_visited_nodes) X
                    where X.column_value = H2.child_id
                );
    
            select
                xmlelement("ID",
                    xmlattributes(i_parent_id as "refno"),
                    case when l_contained_by# > 0 then xmlelement("contained_by", l_contained_by_xml) end,
                    case when l_contains# > 0 then xmlelement("contains", l_contains_xml) end
                )
            into l_result
            from dual;
        end if;
    
        return l_result;
    end;
    

    Running e.g.

    select f_parent_child_xml(101)
    from dual;
    

    or

    select f_parent_child_xml(101).getStringVal()
    from dual;
    

    yields (after manual reformatting):

    (sample output not recoverable: the XML markup was stripped during extraction; the result is a nested <view_hierarchy><link><ID refno="..."> document in which each <ID> may carry <contained_by> and <contains> child elements)

    Enjoy!
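    For readers who want to trace the algorithm outside the database, here is a rough Python analogue of the function above. The element names (view_hierarchy, link, ID, contained_by, contains) mirror the PL/SQL; the hardcoded (parent_id, child_id) pairs standing in for t_hierarchy are invented:

```python
import xml.etree.ElementTree as ET

# Hypothetical stand-in for t_hierarchy: (parent_id, child_id) pairs.
T_HIERARCHY = [(100, 101), (101, 102), (101, 103), (102, 104)]

def parent_child_xml(node_id, visited=None):
    """Mirror f_parent_child_xml: 'contained_by' walks up to parents,
    'contains' walks down to children, and the visited set prevents
    infinite recursion on cyclic data."""
    if visited is None:
        # First call: wrap the result exactly like the PL/SQL does.
        root = ET.Element("view_hierarchy", {"chm": "com.hierarchy"})
        link = ET.SubElement(root, "link")
        link.append(parent_child_xml(node_id, set()))
        return root
    # Everything adjacent to node_id (plus the old set) becomes visited.
    new_visited = set(visited)
    for p, c in T_HIERARCHY:
        if node_id in (p, c):
            new_visited.update((p, c))
    elem = ET.Element("ID", {"refno": str(node_id)})
    parents = [p for p, c in T_HIERARCHY if c == node_id and p not in visited]
    children = [c for p, c in T_HIERARCHY if p == node_id and c not in visited]
    if parents:
        cb = ET.SubElement(elem, "contained_by")
        for p in parents:
            cb.append(parent_child_xml(p, new_visited))
    if children:
        cn = ET.SubElement(elem, "contains")
        for c in children:
            cn.append(parent_child_xml(c, new_visited))
    return elem

xml_text = ET.tostring(parent_child_xml(101), encoding="unicode")
```

    The visited set plays the role of i_visited_nodes: each recursive call only descends to neighbours that were not already reachable, so cyclic data cannot cause infinite recursion.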

    qid & accept id: (27417566, 27418171) query: Delete duplicates in sql server soup:

    soup wrap:

    This will do it for you using a CTE:

    WITH crows AS (
         SELECT MIN(contactID) contactID, ContactName 
         FROM tbl_contact 
         GROUP BY ContactName
    )
    UPDATE a 
        SET contactID = (SELECT ContactID 
                         FROM crows cr 
                         WHERE cr.ContactName in 
                             (SELECT ContactName 
                              FROM tbl_contact 
                              WHERE ContactID = a.contactID))
    FROM tbl_l_contact_fund a
    GO
    DELETE tbl_contact WHERE ContactID NOT IN (SELECT MIN(contactID) contactID 
                                               FROM tbl_contact GROUP BY ContactName)
    

    Or you can do a direct Update

    UPDATE a 
        SET contactID = (SELECT MIN(ContactID) 
                         FROM tbl_contact cr 
                         WHERE cr.ContactName in 
                             (SELECT ContactName 
                              FROM tbl_contact 
                              WHERE ContactID = a.contactID))
    FROM tbl_l_contact_fund a
    GO
    DELETE tbl_contact WHERE ContactID NOT IN (SELECT MIN(contactID) contactID 
                                                FROM tbl_contact GROUP BY ContactName)
    
    qid & accept id: (27418411, 27418686) query: Three way server table check soup:

    soup wrap:

    If you've got a column that stores the date/time that records are added, then you can use this to get an accurate count at a point in time across the databases.

    So firstly set up a variable to hold the datetime checkpoint:

    declare @dateTimeCutOff datetime = GETDATE()
    

    I'm not sure how you're running the checks as you've not provided any code, but you would use this value to query the databases across the servers:

    SELECT COUNT(1) 
    FROM TransactionTable
    WHERE DateAdded <= @dateTimeCutOff
    

    If you run this query against the 3 database tables with the same variable value (which is only set up once and shared between the checks), they should produce the same result.
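    A minimal runnable sketch of the idea, using Python's sqlite3 with three in-memory databases standing in for the three servers (table and column names follow the answer; the rows and the cutoff value are invented):

```python
import sqlite3

# One cutoff value, captured once (in T-SQL this would be GETDATE())
# and shared by every check, so late-arriving rows cannot skew it.
cutoff = "2014-12-11 12:00:00"

def make_server(rows):
    """Stand-in for one server's database."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE TransactionTable (id INTEGER, DateAdded TEXT)")
    con.executemany("INSERT INTO TransactionTable VALUES (?, ?)", rows)
    return con

rows = [(1, "2014-12-11 11:00:00"),
        (2, "2014-12-11 11:30:00"),
        (3, "2014-12-11 12:30:00")]   # added after the cutoff: not counted
servers = [make_server(rows) for _ in range(3)]

counts = [con.execute(
    "SELECT COUNT(1) FROM TransactionTable WHERE DateAdded <= ?",
    (cutoff,)).fetchone()[0] for con in servers]
```

    All three counts come back as 2, because every server is measured against the same checkpoint.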

    qid & accept id: (27471951, 27472027) query: Order and reverse order by group? soup:

    soup wrap:

    You want:

    order by name desc, id asc
    

    You can put multiple keys into the order by. The first key is used for the sorting. When the key values are the same, the second gets used, and so on.
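    A tiny runnable illustration of multi-key ordering, sketched with Python's sqlite3 on invented data:

```python
import sqlite3

# name DESC is the primary sort key; id ASC breaks ties within each name.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, name TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(3, "a"), (1, "b"), (2, "b"), (4, "a")])
rows = con.execute(
    "SELECT id, name FROM t ORDER BY name DESC, id ASC").fetchall()
```

    rows comes back as [(1, 'b'), (2, 'b'), (3, 'a'), (4, 'a')]: the 'b' group first, with ids ascending inside each group.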

    EDIT:

    I see, you want the names with the smallest id first. For this, use a join:

    select t.*
    from table t join
         (select name, min(id) as minid
          from table t
          group by name
         ) n
         on t.name = n.name
    order by n.minid, t.name, t.id
    
    qid & accept id: (27473908, 27473936) query: How to Write MySQL Select Statement to Show Latest Entry by Each UserID? soup:

    soup wrap:

    You can get the latest entry by using a simple group by

    select userid, max(TimeInserted) from content group by userid
    

    Use the result from the group by in the WHERE clause like this:

    select * 
       from content 
    where (userid, TimeInserted) in
    
    (
      select userid, max(TimeInserted) from content group by userid  
    )
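    A sketch of the same groupwise-latest pattern with Python's sqlite3. It spells the filter as a join on the grouped maximum, which is equivalent to the (userid, TimeInserted) IN form above and also works on engines without row-value support; the data is invented:

```python
import sqlite3

# Each user's latest row is the one whose TimeInserted equals that
# user's MAX(TimeInserted) from the grouped subquery.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE content (userid INTEGER, TimeInserted TEXT, body TEXT)")
con.executemany("INSERT INTO content VALUES (?, ?, ?)", [
    (1, "2014-12-01", "old"), (1, "2014-12-10", "new"),
    (2, "2014-12-05", "only")])
latest = con.execute("""
    SELECT c.userid, c.TimeInserted, c.body
    FROM content c
    JOIN (SELECT userid, MAX(TimeInserted) AS maxt
          FROM content GROUP BY userid) m
      ON c.userid = m.userid AND c.TimeInserted = m.maxt
    ORDER BY c.userid""").fetchall()
```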
    
    qid & accept id: (27476006, 27476019) query: Sql delete value from database soup:

    soup wrap:

    If I understand your question you should simply be able to delete based on the sec_id. This assumes that it is part of the table.

    DELETE FROM main WHERE sec_id = @sec_id
    

    In this example, @sec_id is the value of the corresponding sec_id that you want to match.

    If it's not part of the table but you have another table that contains both, then you might use something like:

    DELETE FROM main 
    WHERE EXISTS(SELECT 1 FROM other 
                 WHERE main.id = other.id AND other.sec_id = @sec_id)
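    A runnable sqlite3 sketch of the second form (table names follow the answer; the rows and the sec_id value 9 are invented):

```python
import sqlite3

# Delete rows from main whose id appears in the other table with the
# given sec_id.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE main (id INTEGER)")
con.execute("CREATE TABLE other (id INTEGER, sec_id INTEGER)")
con.executemany("INSERT INTO main VALUES (?)", [(1,), (2,), (3,)])
con.executemany("INSERT INTO other VALUES (?, ?)", [(1, 9), (3, 7)])
con.execute("""
    DELETE FROM main
    WHERE EXISTS(SELECT 1 FROM other
                 WHERE main.id = other.id AND other.sec_id = ?)""", (9,))
remaining = [r[0] for r in con.execute("SELECT id FROM main ORDER BY id")]
```

    Only id 1 is linked to sec_id 9 through other, so it is the only row removed.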
    
    qid & accept id: (27512053, 27531755) query: Symfony2 execute SQL file in Doctrine Fixtures Load soup:

    soup wrap:

    I found a good solution. I didn't find an exec method in the ObjectManager class, so the following works very well for me.

    public function load(ObjectManager $manager)
    {
        // Bundle to manage file and directories
        $finder = new Finder();
        $finder->in('web/sql');
        $finder->name('categories.sql');
    
        foreach( $finder as $file ){
            $content = $file->getContents();
    
            $stmt = $this->container->get('doctrine.orm.entity_manager')->getConnection()->prepare($content);
            $stmt->execute();
        }
    }
    

    In this solution your fixture class has to implement the ContainerAwareInterface with the method

    public function setContainer( ContainerInterface $container = null )
    {
        $this->container = $container;
    }
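    The same read-a-.sql-file-and-execute-it idea can be sketched outside Symfony; here with Python's sqlite3, where executescript plays the role of prepare/execute (the file name and contents are made up, and in the fixture the connection would come from the entity manager instead):

```python
import os
import sqlite3
import tempfile

# Write a throwaway .sql file to stand in for web/sql/categories.sql.
sql = ("CREATE TABLE categories (id INTEGER, name TEXT);\n"
       "INSERT INTO categories VALUES (1, 'books');")
with tempfile.NamedTemporaryFile("w", suffix=".sql", delete=False) as f:
    f.write(sql)
    path = f.name

con = sqlite3.connect(":memory:")
with open(path) as f:
    con.executescript(f.read())   # executes every statement in the file
os.remove(path)
names = [r[0] for r in con.execute("SELECT name FROM categories")]
```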
    
    qid & accept id: (27521297, 27521922) query: How to match rows in the same table across schemas by using foreign key restraints soup:

    soup wrap:

    Your example data (I have renamed some things to make life easier):

    create schema a; create schema b;
    create table a.t (id int primary key ,a text,b text);
    insert into a.t values(1,'A','B'),(2,'C','D');
    create table a.f (id int references a.t(id),field1 text);
    insert into a.f values (1,'XYZ'),(1,'WVU'),(2,'STR'),(2,'PQR');
    create table b.t (id int primary key ,a text,b text);
    insert into b.t values(11,'A''','B'''),(22,'C''','D''');
    create table b.f (id int references b.t(id),field1 text);
    insert into b.f values (11,'XYZ'),(11,'WVU'),(22,'STR'),(22,'PQR');
    

    the join:

    SELECT * FROM a.t 
      JOIN a.f ON a.t.id = a.f.id
      JOIN b.f ON a.f.field1 = b.f.field1 
      JOIN b.t ON b.t.id = b.f.id 
    

    an update?:

    UPDATE b.t 
      SET b=b.t.b||'('||a.t.id||')'
      FROM a.f 
        JOIN b.f ON a.f.field1 = b.f.field1 
        JOIN a.t ON a.t.id = a.f.id 
      WHERE b.t.id = b.f.id
    ;
    

    the cleanup

    drop schema a cascade; drop schema b cascade;
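    The snippets above are PostgreSQL. As an aside, the nearest self-contained sketch in SQLite uses ATTACH to get two dot-qualified namespaces; the data here is trimmed to two rows per table:

```python
import sqlite3

# Two attached in-memory databases play the part of schemas a and b.
con = sqlite3.connect(":memory:")
con.execute("ATTACH ':memory:' AS a")
con.execute("ATTACH ':memory:' AS b")
for s in ("a", "b"):
    con.execute(f"CREATE TABLE {s}.t (id INTEGER PRIMARY KEY, a TEXT, b TEXT)")
    con.execute(f"CREATE TABLE {s}.f (id INTEGER REFERENCES t(id), field1 TEXT)")
con.executemany("INSERT INTO a.t VALUES (?, ?, ?)", [(1, "A", "B"), (2, "C", "D")])
con.executemany("INSERT INTO a.f VALUES (?, ?)", [(1, "XYZ"), (2, "STR")])
con.executemany("INSERT INTO b.t VALUES (?, ?, ?)", [(11, "A'", "B'"), (22, "C'", "D'")])
con.executemany("INSERT INTO b.f VALUES (?, ?)", [(11, "XYZ"), (22, "STR")])

# Same join shape as the answer: match rows across "schemas" via field1.
matched = con.execute("""
    SELECT a.t.id, b.t.id FROM a.t
      JOIN a.f ON a.t.id = a.f.id
      JOIN b.f ON a.f.field1 = b.f.field1
      JOIN b.t ON b.t.id = b.f.id
    ORDER BY a.t.id""").fetchall()
```

    Row 1 in schema a pairs with row 11 in schema b, and 2 with 22, through the shared field1 values.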
    
    qid & accept id: (27547669, 27548173) query: Calculate sum from two queries? soup:

    soup wrap:

    The best way is to break down the query into two simple queries and then join them together. This solution assumes that a job will always have someone working it but might not have any fitting costs (hence the left join from wages to fittings). Really this is a deficiency in the schema design as there should be a job table (maybe there is one that you haven't included in your example) which you would left join both wages and fittings to.

    WITH  job_wage_costs AS
    (
       SELECT   job_id,
                SUM(hours) * 30 AS wage_costs
       FROM     staff_on_job
       GROUP BY job_id
    ),
    job_fitting_costs AS (
       SELECT   job_id,
                SUM(COST) AS fitting_costs
       FROM     job_fittings jf
       JOIN     fittings f ON (f.fitting_name = jf.fitting_name)
       GROUP BY job_id
    )
    SELECT   jw.job_id,
             jw.wage_costs,
             jf.fitting_costs
    FROM     job_wage_costs jw
    LEFT OUTER JOIN job_fitting_costs jf ON (jf.job_id = jw.job_id);
    

    JOB_ID   WAGE_COSTS  FITTING_COSTS 
    1        60          20
    2        480         164.99
    6        1200        199.99
    12       1200        320.98
    9        90 
    
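    The two-CTEs-plus-left-join shape can be exercised with Python's sqlite3 on invented toy rows (the 30-per-hour wage rate follows the answer):

```python
import sqlite3

# Job 1 has wages and a fitting; job 9 has wages only, exercising the
# LEFT OUTER JOIN (its fitting_costs come back NULL).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE staff_on_job (job_id INTEGER, hours REAL)")
con.execute("CREATE TABLE fittings (fitting_name TEXT, cost REAL)")
con.execute("CREATE TABLE job_fittings (job_id INTEGER, fitting_name TEXT)")
con.executemany("INSERT INTO staff_on_job VALUES (?, ?)", [(1, 1), (1, 1), (9, 3)])
con.executemany("INSERT INTO fittings VALUES (?, ?)", [("tap", 20)])
con.executemany("INSERT INTO job_fittings VALUES (?, ?)", [(1, "tap")])

rows = con.execute("""
    WITH job_wage_costs AS (
        SELECT job_id, SUM(hours) * 30 AS wage_costs
        FROM staff_on_job GROUP BY job_id),
    job_fitting_costs AS (
        SELECT job_id, SUM(cost) AS fitting_costs
        FROM job_fittings jf
        JOIN fittings f ON f.fitting_name = jf.fitting_name
        GROUP BY job_id)
    SELECT jw.job_id, jw.wage_costs, jf.fitting_costs
    FROM job_wage_costs jw
    LEFT OUTER JOIN job_fitting_costs jf ON jf.job_id = jw.job_id
    ORDER BY jw.job_id""").fetchall()
```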

    As an aside, the design of your fittings table could do with changing as it isn't a normalized design. By reproducing the fitting type in every row you make it very difficult to change the wording of those fitting types in the future as you'd need to change every row - they should be in a fitting_type table which can then be joined to fittings.

    qid & accept id: (27554417, 27554514) query: Trim String After Keyword soup:

    soup wrap:

    You can use a combination of CHARINDEX, SUBSTRING, and LEN to do it.

    Try this:

    select SUBSTRING(field,charindex('keyword',field), LEN('keyword'))
    

    So this will find Flop and extract it wherever it is in the field

    select SUBSTRING('bullflop',charindex('flop','bullflop'), LEN('flop'))
    

    EDIT:

    To get the remainder, just set the length argument to LEN(field):

    declare @field varchar(200)
    set @field = 'this is bullflop and other such junk'
    select SUBSTRING(@field,charindex('flop',@field), LEN(@field) )
    

    EDIT 2:

    Now I understand, here is a quick and dirty version...

    declare @field varchar(200)
    set @field = 'From X to Y'
    select Replace(SUBSTRING(@field,charindex('to ',@field), LEN(@field) ), 'to ','')
    

    Returns:

    Y

    EDIT 3:

    Cory is right, this is cleaner.

    declare @field varchar(200) = 'From X to Y'
    declare @keyword varchar(200) = 'to '
    select SUBSTRING(@field,charindex(@keyword,@field) + LEN(@keyword), LEN(@field) )
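    As a runnable aside: SQLite has no CHARINDEX, but INSTR/SUBSTR/LENGTH express the same final recipe, demonstrated here with Python's sqlite3 on the EDIT 3 values:

```python
import sqlite3

# INSTR finds the keyword, LENGTH skips past it, SUBSTR takes the rest
# (SUBSTR with one length-less argument reads to the end of the string).
con = sqlite3.connect(":memory:")
field, keyword = "From X to Y", "to "
(result,) = con.execute(
    "SELECT SUBSTR(?1, INSTR(?1, ?2) + LENGTH(?2))",
    (field, keyword)).fetchone()
```

    result is 'Y': everything after the keyword, just like the CHARINDEX + LEN version.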
    
    qid & accept id: (27563140, 27563781) query: Select last changed row in sub-query soup:

    soup wrap:

    The ROW_NUMBER analytical function might help with such queries:

    SELECT  "owner_id", "id", "box_id", "last_activity" FROM
    (
    
        SELECT "owner_id", "id", "box_id", "last_activity",
               ROW_NUMBER() 
                OVER (PARTITION BY "box_id" ORDER BY "last_activity" DESC NULLS LAST) rn
                --                                                   ^^^^^^^^^^^^^^^
                --               descending order, reject nulls after not null values
                --                                 (this is the default, but making it
                --                                  explicit here for self-documentation
                --                                  purpose)
        FROM T
        WHERE "owner_id" = 2
    
    ) V
    WHERE rn = 1 or "box_id" IS NULL
    ORDER BY "id" -- <-- probably not necessary, but matches your example
    

    See http://sqlfiddle.com/#!4/db775/8
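    The same ROW_NUMBER pattern can be tried with Python's sqlite3 (window functions need SQLite 3.25+). Older SQLite lacks NULLS LAST, so the sketch below orders by last_activity IS NULL first to push nulls to the end; the table and rows are invented:

```python
import sqlite3

# Box 10 has two dated rows (the newer one wins); box 20 has only a
# NULL last_activity, so its single row is still rank 1.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE t (owner_id INTEGER, id INTEGER,
                               box_id INTEGER, last_activity TEXT)""")
con.executemany("INSERT INTO t VALUES (?, ?, ?, ?)", [
    (2, 1, 10, "2014-12-01"), (2, 2, 10, "2014-12-09"),
    (2, 3, 20, None)])
rows = con.execute("""
    SELECT owner_id, id, box_id, last_activity FROM (
        SELECT owner_id, id, box_id, last_activity,
               ROW_NUMBER() OVER (
                   PARTITION BY box_id
                   ORDER BY last_activity IS NULL, last_activity DESC) rn
        FROM t WHERE owner_id = 2)
    WHERE rn = 1 ORDER BY id""").fetchall()
```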


    there can be nulls as a value. If there are nulls in all products inside a box, then MIN(id) should be returned

    Even if it is probably not a good idea to rely on id to order things, if you think you need that, you will have to change the ORDER BY clause to:

    ... ORDER BY "last_activity" DESC NULLS LAST, "id" DESC
    --                                          ^^^^^^^^^^^
    
    qid & accept id: (27571493, 27571577) query: Determine if last action performed on each item was an install or a removal soup:

    soup wrap:

    This will select all PanelIDs that have been installed and have not been removed after installation

    select PanelID from (
        select PanelID,
        row_number() over (partition by PanelID order by ActualCompleteDate desc) rn,
        WorkType
        from mytable
    ) t1 where rn = 1 and WorkType = 'electrical install'
    

    Or use NOT EXISTS if your db doesn't support row_number():

    select PanelID from mytable t1
    where WorkType = 'electrical install'
    and not exists (
        select 1 from mytable t2
        where t2.PanelID = t1.PanelID
        and t2.ActualCompleteDate > t1.ActualCompleteDate
        and t2.WorkType = 'electrical removal'
    )
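    A runnable sketch of the NOT EXISTS variant with Python's sqlite3 (string literals are single-quoted here; the double quotes in the answer would be read as identifiers by strictly standard SQL engines):

```python
import sqlite3

# Panel 1 is installed and later removed; panel 2 is installed with no
# later removal, so only panel 2 should be reported.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE mytable (PanelID INTEGER, WorkType TEXT,
                                     ActualCompleteDate TEXT)""")
con.executemany("INSERT INTO mytable VALUES (?, ?, ?)", [
    (1, 'electrical install', '2014-01-01'),
    (1, 'electrical removal', '2014-02-01'),
    (2, 'electrical install', '2014-03-01')])
installed = [r[0] for r in con.execute("""
    SELECT PanelID FROM mytable t1
    WHERE WorkType = 'electrical install'
    AND NOT EXISTS (
        SELECT 1 FROM mytable t2
        WHERE t2.PanelID = t1.PanelID
        AND t2.ActualCompleteDate > t1.ActualCompleteDate
        AND t2.WorkType = 'electrical removal')""")]
```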
    
    qid & accept id: (27573760, 27573862) query: Moving IF EXISTS to the WHERE clause soup:

    soup wrap:

    You can write this as:

    SELECT  Stuff
    FROM    Foo
    WHERE   X = 'Y' AND
            (FullOrderNumber = @FullOrderNo OR
             (NOT EXISTS (SELECT 1 FROM Foo WHERE FullOrderNumber = @FullOrderNo) and OrderNumber = @OrderNo) )
    

    If you are looking for only one row, you can use order by for prioritization:

    SELECT  TOP (1) Stuff
    FROM    Foo
    WHERE   X = 'Y' AND
            (FullOrderNumber = @FullOrderNo OR OrderNumber = @OrderNo)
    ORDER BY (CASE WHEN FullOrderNumber = @FullOrderNo THEN 1 ELSE 2 END)
    

    Actually, even if there are duplicates, you can use with ties like this:

    SELECT  TOP (1) WITH TIES Stuff
    FROM    Foo
    WHERE   X = 'Y' AND
            (FullOrderNumber = @FullOrderNo OR OrderNumber = @OrderNo)
    ORDER BY (CASE WHEN FullOrderNumber = @FullOrderNo THEN 1 ELSE 2 END)
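    TOP (1) is SQL Server syntax; the same prioritization trick can be sketched in SQLite with LIMIT. In the invented data below one row matches only on OrderNumber and one on FullOrderNumber, and the CASE ordering prefers the latter:

```python
import sqlite3

# ORDER BY CASE ranks FullOrderNumber matches ahead of OrderNumber
# matches; LIMIT 1 plays the role of TOP (1).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE Foo (Stuff TEXT, X TEXT,
                                 FullOrderNumber TEXT, OrderNumber TEXT)""")
con.executemany("INSERT INTO Foo VALUES (?, ?, ?, ?)", [
    ("fallback", "Y", "A-1", "1"),    # matches only on OrderNumber
    ("exact",    "Y", "B-1", "1")])   # matches on FullOrderNumber
(stuff,) = con.execute("""
    SELECT Stuff FROM Foo
    WHERE X = 'Y' AND (FullOrderNumber = ?1 OR OrderNumber = ?2)
    ORDER BY (CASE WHEN FullOrderNumber = ?1 THEN 1 ELSE 2 END)
    LIMIT 1""", ("B-1", "1")).fetchone()
```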
    
    qid & accept id: (27585801, 27717012) query: Perform calculations on number ranges mysql soup:

    soup wrap:

    Try this:

    SELECT (CASE WHEN Action1 = 'Add' THEN col1 
                 WHEN Action1 = 'Subtr' THEN col1 + 1 
                 ELSE 0 
            END) col1, 
           (CASE WHEN Action2 = 'Add' THEN col2 
                 WHEN Action2 = 'Subtr' THEN col2 - 1 
                 ELSE 0 
            END) col2
    FROM (SELECT CEILING(RowNum / 2) AS RowId, 
                 MAX(CASE WHEN RowNum % 2 = 1 THEN col ELSE 0 END) AS col1, 
                 MAX(CASE WHEN RowNum % 2 = 1 THEN Action ELSE '' END) AS Action1,         
                 MAX(CASE WHEN RowNum % 2 = 0 THEN col ELSE 0 END) AS col2, 
                 MAX(CASE WHEN RowNum % 2 = 0 THEN Action ELSE '' END) AS Action2
          FROM (SELECT (@id:=@id+1) AS RowNum, col, Action 
                FROM (SELECT col1 AS col, Action 
                      FROM tableA 
                      UNION 
                      SELECT col2 AS col, Action 
                      FROM tableA 
                     ) AS A, 
                     (SELECT @id:=0) AS B
                ORDER BY col
               ) AS A
          GROUP BY RowId
         ) AS A;
    

    Check this SQL FIDDLE DEMO

    OUTPUT

    | COL1 | COL2 |
    |------|------|
    |    1 |   19 |
    |   41 |   59 |
    |   66 |  100 |
    |  200 |  210 |
    

    ::Explanation::

    As you can see in my query, I first fetched all start-range and end-range values into a single column along with their action type, then assigned a row number to each record so the data could be transposed into two columns, creating an inner table like this:

    | ROWID | COL1 | ACTION1 | COL2 | ACTION2 |
    |-------|------|---------|------|---------|
    |     1 |    1 |     Add |   20 |   Subtr |
    |     2 |   40 |   Subtr |   60 |   Subtr |
    |     3 |   65 |   Subtr |  100 |     Add |
    |     4 |  200 |     Add |  210 |     Add |
    

    Finally, I used a CASE statement to generate the specific output.

    qid & accept id: (27610637, 27611059) query: Group and Summarize Time Series Transactions with Start and Stop Times soup:

    To resolve such issues, you need to generate a group number for each run of consecutive rows. Here I first use LAG to emit a tick mark each time a new group starts. An outer query using SUM then counts the tick marks from the first row to the current one to generate the group number:


    See http://sqlfiddle.com/#!4/93f05/2

    \n
    \n

    A little bit more difficult to grasp, but this works too:

    \n
    SELECT "Area", \n       MIN("Start Time") as "Start Time", \n       MAX("End Time") as "End Time",  \n       SUM("End Time" - "Start Time")*60 as "Total Minutes", \n       COUNT("Transaction ID") as "#Transaction ID"\nFROM (\n  SELECT ROWNUM-ROW_NUMBER() \n                   OVER (PARTITION BY "Area" ORDER BY "Start Time") grp,\n       T.*\n  FROM T\n  ORDER BY "Start Time"\n) V\nGROUP BY GRP, "Area"\nORDER BY "Start Time"\n
    \n

    See http://sqlfiddle.com/#!4/07d43/6

    \n

    In that case, beware that you can have duplicate grp -- but only for different group. Unless the rows are contiguous. Please refer to the comments bellow for a discussion about that.

    \n soup wrap:

    To resolve such issues, you need to generate a group number for each run of consecutive rows. Here I first use LAG to generate a tick mark each time we start a new group. An outer query using SUM then counts the tick marks from the first row up to the current one to generate a group number:

    SELECT "Area", 
           MIN("Start Time") as "Start Time", 
           MAX("End Time") as "End Time",  
           SUM("End Time" - "Start Time")*24*60 as "Total Minutes", 
           COUNT("Transaction ID") as "#Transaction ID"
    FROM (
      SELECT SUM(clk)
             OVER (ORDER BY "Start Time"
                   ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) grp,
    --       ^^^^^^^^^^^^^
    --        Generate a group number by summing the tick marks
             V1.*
      FROM (
        SELECT CASE
                 WHEN LAG("Area", 1, NULL) OVER (ORDER BY "Start Time") = "Area"
                 THEN 0
                 ELSE 1
               END clk,
    --         ^^^^^^^^
    --         Set a tick mark ("clock") to 1 each time we change group
             T.*
        FROM T
        ) V1
      ) V2
    GROUP BY GRP, "Area"
    ORDER BY "Start Time"
    

    See http://sqlfiddle.com/#!4/93f05/2


    A little bit more difficult to grasp, but this works too:

    SELECT "Area", 
           MIN("Start Time") as "Start Time", 
           MAX("End Time") as "End Time",  
           SUM("End Time" - "Start Time")*60 as "Total Minutes", 
           COUNT("Transaction ID") as "#Transaction ID"
    FROM (
      SELECT ROWNUM-ROW_NUMBER() 
                       OVER (PARTITION BY "Area" ORDER BY "Start Time") grp,
           T.*
      FROM T
      ORDER BY "Start Time"
    ) V
    GROUP BY GRP, "Area"
    ORDER BY "Start Time"
    

    See http://sqlfiddle.com/#!4/07d43/6

    In that case, beware that you can get duplicate grp values -- but only for different groups, unless the rows are contiguous. Please refer to the comments below for a discussion about that.
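    To see why duplicate grp values are harmless here, consider a hypothetical ordering of five rows across two areas:

```sql
-- Rows ordered by "Start Time" (hypothetical data):
-- Area  ROWNUM  ROW_NUMBER() within Area   grp = ROWNUM - ROW_NUMBER()
-- A     1       1                          0
-- A     2       2                          0
-- B     3       1                          2
-- B     4       2                          2
-- A     5       3                          2
-- The last A row and the B rows share grp = 2, but GROUP BY GRP, "Area"
-- still keeps them in separate groups.
```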

    qid & accept id: (27682228, 27682724) query: How to get matching data from another SQL table for two different columns: Inner Join and/or Union? soup:

soup wrap:

    Every base table has a statement template, aka predicate, parameterized by column names, by which we put a row in or leave it out. We can use a shorthand for the predicate that is like its SQL declaration.

    // facilitator [facilID] is named [facilFname] [facilLname]
    facilitator(facilID,facilLname,facilFname)
    // class [classID] named [className] has prime [primeFacil] & backup [secondFacil]
    class(classID,className,primeFacil,secondFacil)
    

    Plugging a row into a predicate gives a statement aka proposition. The rows that make a true proposition go in a table and the rows that make a false proposition stay out. (So a table states the proposition of each present row and states NOT the proposition of each absent row.)

    // facilitator f1 is named Jane Doe
    facilitator(f1,'Jane','Doe')
    // class c1 named CSC101 has prime f1 & backup f8
    class(c1,'CSC101',f1,f8)
    

    But every table expression value has a predicate per its expression. SQL is designed so that if tables T and U hold the (NULL-free non-duplicate) rows where T(...) and U(...) (respectively) then:

    • T CROSS JOIN U holds rows where T(...) AND U(...)
    • T INNER JOIN U ON condition holds rows where T(...) AND U(...) AND condition
    • T LEFT JOIN U ON condition holds rows where (for U-only columns U1,...)
          T(...) AND U(...) AND condition
      OR T(...) AND NOT(U(...) AND condition) AND U1 IS NULL AND ...
    • T WHERE condition holds rows where T(...) AND condition
    • T INTERSECT U holds rows where T(...) AND U(...)
    • T UNION U holds rows where T(...) OR U(...)
    • T EXCEPT U holds rows where T(...) AND NOT U(...)
    • SELECT DISTINCT * FROM T holds rows where T(...)
    • SELECT DISTINCT columns to keep FROM T holds rows where
      THERE EXISTS columns to drop SUCH THAT T(...)
    • VALUES (C1, C2, ...)((v1,v2, ...), ...) holds rows where
      C1 = v1 AND C2 = v2 AND ... OR ...

    Also:

    • (...) IN T means T(...)
    • scalar = T means T(scalar)
    • T(..., X, ...) AND X = Y means T(..., Y, ...) AND X = Y

    So to query we find a way of phrasing the predicate for the rows that we want in natural language using base table predicates, then in shorthand using base table predicates, then in SQL using base table names (plus conditions wherever needed). If we need to mention a table twice then we give it aliases.

    // natural language
    THERE EXISTS classID,primeFacil,secondFacil SUCH THAT
        class [classID] named [className] has prime [primeFacil] & backup [secondFacil]
    AND facilitator [primeFacil] is named [pf.facilFname] [pf.facilLname]
    AND facilitator [secondFacil] is named [sf.facilFname] [sf.facilLname]
    
    // shorthand
    THERE EXISTS classID,primeFacil,secondFacil SUCH THAT
        class(classID,className,primeFacil,secondFacil)
    AND facilitator(pf.facilID,pf.facilLname,pf.facilFname)
    AND pf.facilID = primeFacil
    AND facilitator(sf.facilID,sf.facilLname,sf.facilFname)
    AND sf.facilID = secondFacil
    
    // table names & (MS Access) SQL
    SELECT className,pf.facilLname,pf.facilFname,sf.facilLname,sf.facilFname
    FROM (class JOIN facilitator AS pf ON pf.facilID = primeFacil)
    JOIN facilitator AS sf ON sf.facilID = secondFacil
    

    OUTER JOIN would be used when a class doesn't always have both facilitators or something doesn't always have all names. (I.e. if a column can be NULL.) But you haven't given the specific predicates for your base tables and query, or the business rules about when things might be NULL, so I have assumed no NULLs.
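    If, say, the backup facilitator can be missing (NULL), the same query can be sketched with LEFT JOIN so classes without a backup still appear; this is an assumption about your rules, not something stated in the question:

```sql
SELECT className, pf.facilLname, pf.facilFname, sf.facilLname, sf.facilFname
FROM (class LEFT JOIN facilitator AS pf ON pf.facilID = primeFacil)
LEFT JOIN facilitator AS sf ON sf.facilID = secondFacil
```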

    (Re MS Access JOIN parentheses see this from SO and this from MS.)

    qid & accept id: (27689846, 27689980) query: How can I order a query by the domains of email addresses? soup:

soup wrap:

    You can use charindex() and substring():

    order by lastLogin,
             substring(email, charindex('@', email) + 1, len(email))
    

    If you want the date component of the login:

    order by cast(lastLogin as date),
             substring(email, charindex('@', email) + 1, len(email))
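    As a quick check of the expression: for 'alice@example.com', charindex('@', email) returns 6, so the substring starts at position 7 and yields just the domain:

```sql
SELECT SUBSTRING('alice@example.com',
                 CHARINDEX('@', 'alice@example.com') + 1,
                 LEN('alice@example.com'))   -- 'example.com'
```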
    
    qid & accept id: (27744850, 27744912) query: SQL computed column for sum of data in another table soup:

soup wrap:

    Create a trigger

    CREATE TRIGGER test
    ON DiskUsage
    after INSERT, UPDATE
    AS
      BEGIN
          UPDATE StatisticsTable
          SET    TotalDiskUsage = (SELECT Sum(DiskUsage)
                                   FROM   DiskUsage)
      END 
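    One thing to watch, assuming SQL Server (which the syntax suggests): the trigger above fires only on INSERT and UPDATE, so deleted rows would leave TotalDiskUsage stale. A sketch that also covers deletes:

```sql
CREATE TRIGGER test
ON DiskUsage
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    UPDATE StatisticsTable
    SET    TotalDiskUsage = (SELECT SUM(DiskUsage)
                             FROM   DiskUsage);
END
```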
    

    Or, as mentioned by King.code, create a view instead of having a table:

    CREATE VIEW StatisticsTable
    AS
      SELECT Sum(DiskUsage) AS TotalDiskUsage
      FROM   DiskUsage 
    
    qid & accept id: (27789682, 27795120) query: Oracle 11g hierarchical query needs some inherited data soup:

soup wrap:

    Your inner query is correct. All you need is to pick the rightmost number from the meat_id column of the inner query when the flag is Y. I have used the REGEXP_SUBSTR function to get the rightmost number and a CASE expression to check the flag.

    SQL Fiddle

    Query 1:

    select  taco_id,  
            taco_name,
            taco_prntid,
            case meat_inht
                when 'N' then meat_id
                when 'Y' then to_number(regexp_substr(meat_id2,'\d+\s*$'))
            end meat_id,
            meat_inht
    from    (   select   taco_id, 
                         taco_name,
                         taco_prntid,
                         meat_id,
                         meat_inht,
                         level-1 "level", 
                         sys_connect_by_path(meat_id, ' ') meat_id2
                from     taco
                start    with taco_prntid is null 
                connect  by prior taco_id = taco_prntid 
            )
    order by 1
    

    Results:

    | TACO_ID | TACO_NAME | TACO_PRNTID | MEAT_ID | MEAT_INHT |
    |---------|-----------|-------------|---------|-----------|
    |       1 |         1 |      (null) |       1 |         N |
    |       2 |       1.1 |           1 |       1 |         Y |
    |       3 |     1.1.1 |           2 |  (null) |         N |
    |       4 |       1.2 |           1 |       2 |         N |
    |       5 |     1.2.1 |           4 |       2 |         Y |
    |       6 |     1.1.2 |           2 |       1 |         Y |
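    The inner query's MEAT_ID2 column holds the whole SYS_CONNECT_BY_PATH string, e.g. ' 1 2' for taco 5, and the REGEXP_SUBSTR keeps only the trailing number:

```sql
SELECT TO_NUMBER(REGEXP_SUBSTR(' 1 2', '\d+\s*$')) FROM dual  -- returns 2
```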
    

    Query 2:

    select   taco_id, 
                         taco_name,
                         taco_prntid,
                         meat_id,
                         meat_inht,
                         level-1 "level", 
                         sys_connect_by_path(meat_id, ' ') meat_id2
                from     taco
                start    with taco_prntid is null 
                connect  by prior taco_id = taco_prntid 
    

    Results:

    | TACO_ID | TACO_NAME | TACO_PRNTID | MEAT_ID | MEAT_INHT | LEVEL | MEAT_ID2 |
    |---------|-----------|-------------|---------|-----------|-------|----------|
    |       1 |         1 |      (null) |       1 |         N |     0 |     1    |
    |       2 |       1.1 |           1 |  (null) |         Y |     1 |     1    |
    |       3 |     1.1.1 |           2 |  (null) |         N |     2 |     1    |
    |       6 |     1.1.2 |           2 |  (null) |         Y |     2 |     1    |
    |       4 |       1.2 |           1 |       2 |         N |     1 |     1 2  |
    |       5 |     1.2.1 |           4 |  (null) |         Y |     2 |     1 2  |
    
    qid & accept id: (27817195, 27817597) query: Distinct LISTAGG that is inside a subquery in the SELECT list soup:

soup wrap:

    The following method gets rid of the in-line view used to fetch duplicates; it uses REGEXP_REPLACE and RTRIM on the LISTAGG result to get a distinct aggregated list. Thus, it won't do more than one scan.

    Adding this piece to your code,

    RTRIM(REGEXP_REPLACE(listagg (tm_redir.team_code, ',') 
                         WITHIN GROUP (ORDER BY tm_redir.team_code),
                         '([^,]+)(,\1)+', '\1'),
                         ',')
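    The pattern '([^,]+)(,\1)+' matches a list element followed by one or more adjacent repeats of itself and collapses them to a single copy -- which is why the WITHIN GROUP (ORDER BY ...) matters: duplicates must be adjacent to be caught. For example:

```sql
SELECT RTRIM(REGEXP_REPLACE('UUU,UUU,VV,WWW,WWW,', '([^,]+)(,\1)+', '\1'), ',')
FROM dual
-- returns 'UUU,VV,WWW'
```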
    

    Modified query:

    SQL> with tran_party as -- ALL DUMMY DATA ARE IN THESE CTE FOR YOUR REFERENCE
      2           (select 1 tran_party_id, 11 transaction_id, 101 team_id_redirect
      3              from dual
      4            union all
      5            select 2, 11, 101 from dual
      6            union all
      7            select 3, 11, 102 from dual
      8            union all
      9            select 4, 12, 103 from dual
     10            union all
     11            select 5, 12, 103 from dual
     12            union all
     13            select 6, 12, 104 from dual
     14            union all
     15            select 7, 13, 104 from dual
     16            union all
     17            select 8, 13, 105 from dual),
     18       tran as
     19           (select 11 transaction_id, 1001 account_id, 1034.93 amount from dual
     20            union all
     21            select 12, 1001, 2321.89 from dual
     22            union all
     23            select 13, 1002, 3201.47 from dual),
     24       account as
     25           (select 1001 account_id, 111 team_id from dual
     26            union all
     27            select 1002, 112 from dual),
     28       team as
     29           (select 101 team_id, 'UUU' as team_code from dual
     30            union all
     31            select 102, 'VV' from dual
     32            union all
     33            select 103, 'WWW' from dual
     34            union all
     35            select 104, 'XXXXX' from dual
     36            union all
     37            select 105, 'Z' from dual)
     38  -- The Actual Query
     39  select a.account_id,
     40         t.transaction_id,
     41         (SELECT  RTRIM(
     42           REGEXP_REPLACE(listagg (tm_redir.team_code, ',')
     43                     WITHIN GROUP (ORDER BY tm_redir.team_code),
     44             '([^,]+)(,\1)+', '\1'),
     45           ',')
     46            from tran_party tp_redir
     47                 inner join team tm_redir
     48                     on tp_redir.team_id_redirect = tm_redir.team_id
     49                 inner join tran t_redir
     50                     on tp_redir.transaction_id = t_redir.transaction_id
     51           where     t_redir.account_id = a.account_id
     52                 and t_redir.transaction_id != t.transaction_id)
     53             AS teams_redirected
     54    from tran t inner join account a on t.account_id = a.account_id
     55  /
    
    ACCOUNT_ID TRANSACTION_ID TEAMS_REDIRECTED
    ---------- -------------- --------------------
          1001             11 WWW,XXXXX
          1001             12 UUU,VV
          1002             13
    
    SQL>
    
    qid & accept id: (27832430, 27832895) query: How to query for a count within a count (without using a sub query)? soup:

soup wrap:

    I think you could use this:

    select a1.person
      from awards a1
      join awards a2
        on a1.person = a2.person
       and a1.year = a2.year
       and a1.award <> a2.award
     group by a1.person
    having count(distinct a1.year) > 1
    

    Fiddle: http://sqlfiddle.com/#!2/b98bf/8/0

    But you would be better off with a subquery:

    select person
      from (select person, year, count(*) as num_in_yr
              from awards
             group by person, year) x
     group by person
    having sum(num_in_yr >= 2) >= 2
    

    Fiddle: http://sqlfiddle.com/#!2/b98bf/7/0
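    Note that sum(num_in_yr >= 2) relies on MySQL evaluating the comparison to 0 or 1. In engines without that shortcut, the same test can be written, assuming standard SQL, as:

```sql
having sum(case when num_in_yr >= 2 then 1 else 0 end) >= 2
```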

    qid & accept id: (27855167, 27855506) query: mysql concatenate columns if column is not null soup:

soup wrap:

    Depending on what you want to do if the value is null, you can try

    SELECT CONCAT(
    c.custom_param1, '=', IFNULL(c.custom_value1, ''), '; ',
    c.custom_param2, '=', IFNULL(c.custom_value2, ''), '; ',
    c.custom_param3, '=', IFNULL(c.custom_value3, ''), '; ') as 'Custom Parameters'
    FROM campaign as c;
    

    Will return

    param1=value1; param2=value2; param3=;
    

    Or you can exclude the whole value pair like this....

    SELECT CONCAT(
    IFNULL(CONCAT(c.custom_param1, '=', c.custom_value1, '; '), ''),
    IFNULL(CONCAT(c.custom_param2, '=', c.custom_value2, '; '), ''),
    IFNULL(CONCAT(c.custom_param3, '=', c.custom_value3, '; '), '')) AS 'Custom Parameters'
    FROM campaign as c;
    

    which will return

    param1=value1; param2=value2;
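    The second form works because in MySQL CONCAT returns NULL as soon as any argument is NULL, so the whole param=value pair collapses and IFNULL swaps in an empty string:

```sql
SELECT CONCAT('param3', '=', NULL, '; ');              -- NULL
SELECT IFNULL(CONCAT('param3', '=', NULL, '; '), '');  -- ''
```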
    

    Hope that helps

    qid & accept id: (27876454, 27876473) query: Merge two columns from different tables soup:

soup wrap:

    Use a union:

    select email
    from   teachers
    union
    select email
    from   students
    

    It concatenates the two results and shows the overall distinct values. (In contrast to UNION ALL, which can return duplicate values, since all rows are kept, not only the distinct ones.)

    Just a little extra, if you do want to know the origin of the email address, you could do this:

    select 'teacher' origin
    ,      id
    ,      email
    from   teachers
    union
    select 'student' origin
    ,      id
    ,      email
    from   students
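    One thing to keep in mind with the second query: once the origin literal is added, an address present in both tables is no longer collapsed, because the rows differ in that first column:

```sql
-- If 'x@y.z' exists in both tables, the first query returns it once,
-- but the second returns two rows:
--   ('teacher', ..., 'x@y.z')
--   ('student', ..., 'x@y.z')
```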
    
    qid & accept id: (27916519, 27916579) query: Update column data without using temporary tables soup:

soup wrap:

    You could use a case expression:

    UPDATE emp
    SET    gender = CASE gender WHEN 'M' THEN 'F' ELSE 'M' END
    

    EDIT:
    The above statement assumes, for simplicity's sake, that 'M' and 'F' are the only two options - no nulls, no unknowns, no nothing. A more robust query could eliminate this assumption and just strictly replace Ms and Fs leaving other possible values untouched:

    UPDATE emp
    SET    gender = CASE gender WHEN 'M' THEN 'F' 
                                WHEN 'F' THEN 'M'
                                ELSE gender
                    END
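    An equivalent sketch of the second statement filters the rows up front instead of passing unchanged values through the CASE:

```sql
UPDATE emp
SET    gender = CASE gender WHEN 'M' THEN 'F' ELSE 'M' END
WHERE  gender IN ('M', 'F')
```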
    
    qid & accept id: (27922122, 27922337) query: sqlite How to get whole month data from table soup:

soup wrap:

    Normally you can try the SQLite date and time functions.

    According to this, time and date parameters can be passed as arguments; combined with strftime, this proves to be a very powerful tool. Some specifiers are:

    %d      day of month: 00
    %f      fractional seconds: SS.SSS
    %H      hour: 00-24
    %m      month: 01-12 where, Jan=01, Feb=02, ... ... ... December==12.
    %M      minute: 00-59
    %S      seconds: 00-59
    %w      day of week 0-6 with Sunday==0, Monday==1, ... ... Saturday==6.
    %Y      year: 0000-9999 
    

    For example, if you want to do a SELECT * for the month of April, it will be like:

    SELECT * FROM `table name` WHERE strftime('%m', `date column`) = '04'
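    Note this matches April of every year. To restrict it to one month of one year, combine specifiers (assuming the column holds an ISO date/time string, which is what SQLite's date functions expect):

```sql
SELECT * FROM `table name`
WHERE strftime('%Y-%m', `date column`) = '2014-04'
```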
    
    qid & accept id: (27932305, 27933756) query: Database Design for Conditional Questionnaire soup:

soup wrap:

    A table has an associated fill-in-the-(named-)blanks statement aka predicate. Rows that make it into a true statement go in the table. Rows that make it into a false statement stay out. That's how we interpret a table (base, view or query) and update a base. Each table represents an application relationship.

    (So your predicate-style quote for 2 is how to give a table's meaning. Because then JOIN's meaning is the AND of argument meanings, and UNION the OR, EXCEPT is the AND NOT, etc.)

    1. How can I modify my schema to "link" answers to multiple choice questions? (ie "the following answers are available for question X.")
    // question [question_id] has available answer [answer_id]
    question_answers(question_id, answerid)
    
    1. How should the answers drive the next question? (ie. "for question #1, if answer A is chosen, then GOTO question 5")
    // for question [this_id] if answer [answer_id] is chosen then go to question [next_id]
    next_question(this_id, answer_id, next_id)
    

    PS
    There are many different ways of representing graphs (nodes with edges between them) via tables. Here the nodes are questions and the edges are this-next question pairs. Different tables support different kinds of graphs and different patterns of reading and update. (I chose one reflecting your application, but framed my answer to help you find your best representation via proper design yourself.)

    PPS
    If different user paths through the questions mean that which question follows another is context-dependent:

    // in context [this_id] if answer [answer_id] is chosen then go to context[next_id]
    next_context(this_id, answer_id, next_id)
    

    What a "context" is depends on aspects of your application that you have not given. What you have given suggests that your only notion of context is the current question. Also, depending on what a context contains, this table may need normalization. You might want independent notions of current context vs current question. (Topic: finite state machines.)
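
    As a rough sketch of how these relationships could be declared (the table and column names follow the ones above, but the data types and the base questions/answers tables are assumptions):

    ```sql
    -- question [question_id] has available answer [answer_id]
    CREATE TABLE question_answers (
        question_id INTEGER NOT NULL REFERENCES questions(question_id),
        answer_id   INTEGER NOT NULL REFERENCES answers(answer_id),
        PRIMARY KEY (question_id, answer_id)
    );

    -- for question [this_id] if answer [answer_id] is chosen then go to question [next_id]
    CREATE TABLE next_question (
        this_id   INTEGER NOT NULL REFERENCES questions(question_id),
        answer_id INTEGER NOT NULL REFERENCES answers(answer_id),
        next_id   INTEGER NOT NULL REFERENCES questions(question_id),
        PRIMARY KEY (this_id, answer_id)
    );
    ```

    The composite primary key on (this_id, answer_id) is what makes each answer lead to exactly one next question.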

    qid & accept id: (27979526, 27979853) query: Convert decimal data to custom format in SQL soup:

    Decimals, dates, integers etc have no format. They are all binary values. Formats apply only when you want to create a string from the value or parse a string to a value.

    \n

    In SQL Server 2012+ you can use the FORMAT function to format a decimal in a custom format, eg:

    \n
    declare @data decimal(19,6)=1050.850000\nselect FORMAT(@data,'#,###.00')\n
    \n

    The syntax of the format string is the same as .NET's

    \n

    Your desired output truncates the decimals yet displays the value with decimals. In case this isn't a typo, you can either replace the decimals with literals, eg:

    \n
    select FORMAT(@data,'#,###\.\0\0')\n
    \n

    Or truncate the value before formatting

    \n
    declare @data decimal(19,6)=1050.850000\nselect FORMAT(floor(@data),'#,###.00')\n
    \n

    In previous SQL Server versions you are restricted to the predefined money type formats of the CONVERT function :

    \n
    select CONVERT(nvarchar,cast(@data as money),1)\n
    \n

    Note that nvarchar defaults to nvarchar(30). Strings larger than 30 characters will be truncated to the first 30 characters.

    \n

    Again, if you want to truncate the decimals, use the FLOOR function.

    \n soup wrap:

    Decimals, dates, integers etc have no format. They are all binary values. Formats apply only when you want to create a string from the value or parse a string to a value.

    In SQL Server 2012+ you can use the FORMAT function to format a decimal in a custom format, eg:

    declare @data decimal(19,6)=1050.850000
    select FORMAT(@data,'#,###.00')
    

    The syntax of the format string is the same as .NET's

    Your desired output truncates the decimals yet displays the value with decimals. In case this isn't a typo, you can either replace the decimals with literals, eg:

    select FORMAT(@data,'#,###\.\0\0')
    

    Or truncate the value before formatting

    declare @data decimal(19,6)=1050.850000
    select FORMAT(floor(@data),'#,###.00')
    

    In previous SQL Server versions you are restricted to the predefined money type formats of the CONVERT function:

    select CONVERT(nvarchar,cast(@data as money),1)
    

    Note that nvarchar defaults to nvarchar(30). Strings larger than 30 characters will be truncated to the first 30 characters.

    Again, if you want to truncate the decimals, use the FLOOR function.
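
    Combining the two pre-2012 points above, a truncated money-style string could be sketched like this (nvarchar(30) is made explicit to avoid the default-length surprise):

    ```sql
    declare @data decimal(19,6) = 1050.850000
    -- FLOOR drops the decimals before the money-style conversion
    select CONVERT(nvarchar(30), cast(floor(@data) as money), 1)
    ```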

    qid & accept id: (28031438, 28053861) query: JOINS between 7 tables soup:

    I've placed the tables in a more logic other:

    \n
    SELECT DISTINCT\n    p.name,\n    v.VehicleType,\n    v.make,\n    v.model,\n    CASE\n       WHEN ltvt.VehicleType_ID =  v.VehicleType_ID THEN 1\n       ELSE 0\n    END allowed\nFROM\n    Person p\nJOIN\n    Vehicles v\n    ON p.ID = v.person_id\nLEFT JOIN\n    DrivingLicense dl\n    ON p.ID = dl.Person_ID\nLEFT JOIN\n    DrivingLicenseLicenceType dlt\n    ON dl.ID = dlt.DrivingLicense_ID\nLEFT JOIN\n    LicenseTypes lt\n    ON dlt.LicenseType_ID = lt.ID\nLEFT JOIN\n    LicenseTypeVehicleTypes ltvt\n    ON dlt.LicenseType_ID = ltvt.LicenseType_ID\n    AND ltvt.VehicleType_ID = v.VehicleType_ID\n    AND dl.ExpiryDate >= CURRENT_DATE \n
    \n

    This will result in all persons with the vehicles they own.\nBy adding some conditions to the last left join you can determine if driving a certain vehicle is permitted.\nIf you want a specific person and/or a specific vehicle just add a WHERE clause

    \n

    Update

    \n
    \n

    John has 1 drivers license, which has 2 licensetypes: B and AM. Both B and AM Licensetypes are connected to to the VehicleType: Scooter. Only Licensetype B, is connected to my VehicleType: Familywagon, when i change the vehicletype from Scooter to Familywagon, my value shows 0, that i'm not allowed to drive this vehicle

    \n
    \n

    This has to do with the join logic and the datamodel.\nBecause driving license is related to person and not directly to a vehicle, with the left join a vehicletype will have as many entries as the number of vehicletypes a licensetype has.

    \n

    In your situation John's Familywagon will apear twice in the result, once for the scooter part of the license type (with which he isn't allowed to drive a car), and once for Familywagon part (with which he is allowed to drive a car).

    \n

    This can be avoided by creating a virtual table with a subquery with the distinct vehicletypes a person is allowed to drive.

    \n
    SELECT\n    p.name,\n    v.VehicleType_ID,\n    v.make,\n    v.model,\n    CASE\n        WHEN dls.VehicleType_ID =  v.VehicleType_ID THEN 1\n        ELSE 0\n    END allowed\nFROM\n    Person p\nJOIN\n    Vehicles v\n    ON p.ID = v.Person_ID\nLEFT JOIN\n    (SELECT DISTINCT\n        dl.Person_ID,\n        ltvt.VehicleType_ID\n    FROM\n        DrivingLicense dl       \n    JOIN\n        DrivingLicenseLicenseTypes dlt\n        ON dl.ID = dlt.DrivingLicense_ID\n    JOIN\n        LicenseTypes lt\n        ON dlt.LicenseType_ID = lt.ID\n    JOIN\n        LicenseTypeVehicleTypes ltvt\n        ON dlt.LicenseType_ID = ltvt.LicenseType_ID\n    WHERE\n        dl.ExpiryDate >= CURRENT_DATE\n    ) dls\n    ON p.ID = dls.Person_ID\n        AND dls.VehicleType_ID =  v.VehicleType_ID\n
    \n

    Note that it might be wise to include the persons table in the subquery, if you want information about a certain person, so you can narrow down the number of results from the subquery.

    \n soup wrap:

    I've placed the tables in a more logical order:

    SELECT DISTINCT
        p.name,
        v.VehicleType,
        v.make,
        v.model,
        CASE
           WHEN ltvt.VehicleType_ID =  v.VehicleType_ID THEN 1
           ELSE 0
        END allowed
    FROM
        Person p
    JOIN
        Vehicles v
        ON p.ID = v.person_id
    LEFT JOIN
        DrivingLicense dl
        ON p.ID = dl.Person_ID
    LEFT JOIN
        DrivingLicenseLicenceType dlt
        ON dl.ID = dlt.DrivingLicense_ID
    LEFT JOIN
        LicenseTypes lt
        ON dlt.LicenseType_ID = lt.ID
    LEFT JOIN
        LicenseTypeVehicleTypes ltvt
        ON dlt.LicenseType_ID = ltvt.LicenseType_ID
        AND ltvt.VehicleType_ID = v.VehicleType_ID
        AND dl.ExpiryDate >= CURRENT_DATE 
    

    This will result in all persons with the vehicles they own. By adding some conditions to the last left join you can determine whether driving a certain vehicle is permitted. If you want a specific person and/or a specific vehicle, just add a WHERE clause.
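
    For instance, to restrict the result to one person, a clause like this could be appended after the last LEFT JOIN ('John' is a made-up value):

    ```sql
    WHERE p.name = 'John'
    ```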

    Update

    John has 1 driver's license, which has 2 license types: B and AM. Both the B and AM license types are connected to the VehicleType: Scooter. Only license type B is connected to my VehicleType: Familywagon; when I change the vehicle type from Scooter to Familywagon, my value shows 0, i.e. that I'm not allowed to drive this vehicle

    This has to do with the join logic and the data model. Because a driving license is related to a person and not directly to a vehicle, with the left join a vehicle type will have as many entries as the number of vehicle types a license type has.

    In your situation John's Familywagon will appear twice in the result: once for the scooter part of the license type (with which he isn't allowed to drive a car), and once for the Familywagon part (with which he is allowed to drive a car).

    This can be avoided by creating a virtual table via a subquery containing the distinct vehicle types a person is allowed to drive.

    SELECT
        p.name,
        v.VehicleType_ID,
        v.make,
        v.model,
        CASE
            WHEN dls.VehicleType_ID =  v.VehicleType_ID THEN 1
            ELSE 0
        END allowed
    FROM
        Person p
    JOIN
        Vehicles v
        ON p.ID = v.Person_ID
    LEFT JOIN
        (SELECT DISTINCT
            dl.Person_ID,
            ltvt.VehicleType_ID
        FROM
            DrivingLicense dl       
        JOIN
            DrivingLicenseLicenseTypes dlt
            ON dl.ID = dlt.DrivingLicense_ID
        JOIN
            LicenseTypes lt
            ON dlt.LicenseType_ID = lt.ID
        JOIN
            LicenseTypeVehicleTypes ltvt
            ON dlt.LicenseType_ID = ltvt.LicenseType_ID
        WHERE
            dl.ExpiryDate >= CURRENT_DATE
        ) dls
        ON p.ID = dls.Person_ID
            AND dls.VehicleType_ID =  v.VehicleType_ID
    

    Note that it might be wise to include the persons table in the subquery, if you want information about a certain person, so you can narrow down the number of results from the subquery.

    qid & accept id: (28043394, 28043516) query: Import file with : separators into MySQL database soup:

    With PHPMyAdmin

    \n

    On the import tab, you can upload a CSV file. Upload your file, select format 'CSV', and set:

    \n
      \n
    • Columns separated with: :
    • \n
    • Columns enclosed with: (empty)
    • \n
    \n

    With PHP

    \n

    You can write a small PHP program to do that. First, read in the file:

    \n
    $fh = fopen('the-file.txt', 'r');\n
    \n

    Now, $fh is a handler to read the-file.txt. Now we can use fgetcsv():

    \n
    while (($data = fgetcsv($fh, 1000, ':', '')) !== false) {\n    // insert data in database\n}\n
    \n

    $data is an array of all the data in the record.

    \n
    \n

    An example with PDO and a prepared statement:

    \n
    // prepare the statement to insert a new record\n$stmt = $pdo->prepare("INSERT INTO `some_table` (`id`, `category`, `topic`, `date`) VALUES (?, ?, ?, ?)");\n// read the file\n$fh = fopen('the-file.txt', 'r');\nwhile (($data = fgetcsv($fh, 1000, ':', '')) !== false) {\n    $stmt->execute($data);\n}\n
    \n

    This would assume the column names are 'id', 'category', 'topic' and 'date'.

    \n

    The number 1000 is the max line length. You can set it to 0, for no maximum length, but this is slightly slower. See the docs for more information.

    \n soup wrap:

    With PHPMyAdmin

    On the import tab, you can upload a CSV file. Upload your file, select format 'CSV', and set:

    • Columns separated with: :
    • Columns enclosed with: (empty)

    With PHP

    You can write a small PHP program to do that. First, read in the file:

    $fh = fopen('the-file.txt', 'r');
    

    Now, $fh is a handler to read the-file.txt. Now we can use fgetcsv():

    while (($data = fgetcsv($fh, 1000, ':', '')) !== false) {
        // insert data in database
    }
    

    $data is an array of all the data in the record.


    An example with PDO and a prepared statement:

    // prepare the statement to insert a new record
    $stmt = $pdo->prepare("INSERT INTO `some_table` (`id`, `category`, `topic`, `date`) VALUES (?, ?, ?, ?)");
    // read the file
    $fh = fopen('the-file.txt', 'r');
    while (($data = fgetcsv($fh, 1000, ':', '')) !== false) {
        $stmt->execute($data);
    }
    

    This would assume the column names are 'id', 'category', 'topic' and 'date'.

    The number 1000 is the max line length. You can set it to 0, for no maximum length, but this is slightly slower. See the docs for more information.
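
    If you have direct access to the MySQL server, the same import can also be sketched with MySQL's own LOAD DATA statement (the file, table and column names are assumptions carried over from the example above):

    ```sql
    LOAD DATA LOCAL INFILE 'the-file.txt'
    INTO TABLE some_table
    FIELDS TERMINATED BY ':'
    LINES TERMINATED BY '\n'
    (`id`, `category`, `topic`, `date`);
    ```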

    qid & accept id: (28061347, 28061902) query: Replace comma separated values soup:

    I have written logic inside the query

    \n
    ;WITH CTE AS\n(\n    SELECT LTRIM(RTRIM(Split.a.value('.', 'VARCHAR(100)'))) 'KeyWords' \n    FROM  \n    (\n         -- To change ',' to any other delimeter, just change ',' before '' to your desired one\n         SELECT CAST ('' + REPLACE(Primary_Kwd, ',', '') + '' AS XML) AS Data \n         FROM Kwd_UploadRecored     \n    ) AS A \n    CROSS APPLY Data.nodes ('/M') AS Split(a)\n\n    UNION ALL\n\n    SELECT LTRIM(RTRIM(Split.a.value('.', 'VARCHAR(100)'))) 'KeyWords' \n    FROM  \n    (\n         -- To change ',' to any other delimeter, just change ',' before '' to your desired one\n         SELECT CAST ('' + REPLACE(Sec_Kwd, ',', '') + '' AS XML) AS Data \n         FROM Kwd_UploadRecored     \n    ) AS A \n    CROSS APPLY Data.nodes ('/M') AS Split(a)\n\n    UNION ALL\n\n    SELECT LTRIM(RTRIM(Split.a.value('.', 'VARCHAR(100)'))) 'KeyWords' \n    FROM  \n    (\n         -- To change ',' to any other delimeter, just change ',' before '' to your desired one\n         SELECT CAST ('' + REPLACE(Main_Kwd, ',', '') + '' AS XML) AS Data \n         FROM Kwd_UploadRecored     \n    ) AS A \n    CROSS APPLY Data.nodes ('/M') AS Split(a)\n)\nSELECT T.English_Keywords, T.German_Keywords\nFROM CTE C\nJOIN Englishgermankwds_tbl T ON C.KeyWords=T.English_Keywords\n
    \n\n

    UPDATE

    \n

    Here is the query that does your expected output.

    \n
    ;WITH CTE AS\n(\n    -- Since CSV values is scattered with non-alphabetical order, we use ROW_NUMBER()\n    -- to maintain the order by default\n    SELECT *,\n    ROW_NUMBER() OVER(PARTITION BY ID ORDER BY (SELECT(0))) RNO,'Primary_Kwd' Colum \n    FROM\n    (\n        -- Convert CSV to rows\n        SELECT ID,LTRIM(RTRIM(Split.a.value('.', 'VARCHAR(100)'))) 'KeyWords' \n        FROM  \n        (\n             -- To change ',' to any other delimeter, just change ',' before '' to your desired one\n             SELECT ID,CAST ('' + REPLACE(Primary_Kwd, ',', '') + '' AS XML) AS Data \n             FROM #Kwd_UploadRecored     \n        ) AS A \n        CROSS APPLY Data.nodes ('/M') AS Split(a)\n    )TAB\n\n    UNION ALL\n\n    SELECT *,\n    ROW_NUMBER() OVER(PARTITION BY ID ORDER BY (SELECT(0))) RNO,'Sec_Kwd'  \n    FROM\n    (\n        SELECT ID,LTRIM(RTRIM(Split.a.value('.', 'VARCHAR(100)'))) 'KeyWords' \n        FROM  \n        (\n             -- To change ',' to any other delimeter, just change ',' before '' to your desired one\n             SELECT ID,CAST ('' + REPLACE(Sec_Kwd, ',', '') + '' AS XML) AS Data \n             FROM #Kwd_UploadRecored     \n        ) AS A \n        CROSS APPLY Data.nodes ('/M') AS Split(a)\n    )TAB\n\n    UNION ALL\n\n    SELECT *,\n    ROW_NUMBER() OVER(PARTITION BY ID ORDER BY (SELECT(0))) RNO,'Main_Kwd'  \n    FROM\n    (\n        SELECT ID,LTRIM(RTRIM(Split.a.value('.', 'VARCHAR(100)'))) 'KeyWords' \n        FROM  \n        (\n             -- To change ',' to any other delimeter, just change ',' before '' to your desired one\n             SELECT ID,CAST ('' + REPLACE(Main_Kwd, ',', '') + '' AS XML) AS Data \n             FROM #Kwd_UploadRecored     \n        ) AS A \n        CROSS APPLY Data.nodes ('/M') AS Split(a)\n    )TAB\n)\n,CTE2 AS\n(\n    -- Check for German word, if matched German word else English\n    SELECT C.ID,C.RNO,C.Colum,ISNULL(T.German_Keywords,C.KeyWords) German_Keywords \n    FROM CTE C\n    LEFT JOIN 
#Englishgermankwds_tbl T ON C.KeyWords=T.English_Keywords \n) \n,CTE3 AS\n(\n    -- Convert back to CSV values with the old order of strings\n    SELECT  ID,COLUM,\n    SUBSTRING(\n            (SELECT  ', ' + German_Keywords\n            FROM CTE2 \n            WHERE C2.Id=Id AND C2.COLUM=COLUM \n            ORDER BY RNO\n            FOR XML PATH('')),2,200000) German_Keywords\n    FROM CTE2 C2\n)\n-- Now we convert back Primary_Kwd,Sec_Kwd,Main_Kwd to columns with CSV values\nSELECT ID,\nMIN(CASE Colum WHEN 'Primary_Kwd' THEN German_Keywords END) Primary_Kwd,\nMIN(CASE Colum WHEN 'Sec_Kwd' THEN German_Keywords END) Sec_Kwd,\nMIN(CASE Colum WHEN 'Main_Kwd' THEN German_Keywords END) Main_Kwd \nFROM CTE3\nGROUP BY ID\n
    \n\n

    UPDATE 2

    \n

    After closing the bracket of CTE3 give the below code

    \n
    UPDATE Kwd_UploadRecored \nSET Primary_Kwd = TAB.Primary_Kwd,\nSec_Kwd = TAB.Sec_Kwd,\nMain_Kwd = TAB.Main_Kwd\nFROM\n(\n    SELECT ID,\n    MIN(CASE Colum WHEN 'Primary_Kwd' THEN German_Keywords END) Primary_Kwd,\n    MIN(CASE Colum WHEN 'Sec_Kwd' THEN German_Keywords END) Sec_Kwd,\n    MIN(CASE Colum WHEN 'Main_Kwd' THEN German_Keywords END) Main_Kwd \n    FROM CTE3\n    GROUP BY ID\n)TAB\nWHERE Kwd_UploadRecored.ID=TAB.ID\n
    \n\n soup wrap:

    I have written the logic inside the query:

    ;WITH CTE AS
    (
        SELECT LTRIM(RTRIM(Split.a.value('.', 'VARCHAR(100)'))) 'KeyWords' 
        FROM  
        (
             -- To change ',' to any other delimiter, change the ',' inside REPLACE to your desired one
             SELECT CAST ('<M>' + REPLACE(Primary_Kwd, ',', '</M><M>') + '</M>' AS XML) AS Data 
             FROM Kwd_UploadRecored     
        ) AS A 
        CROSS APPLY Data.nodes ('/M') AS Split(a)
    
        UNION ALL
    
        SELECT LTRIM(RTRIM(Split.a.value('.', 'VARCHAR(100)'))) 'KeyWords' 
        FROM  
        (
             -- To change ',' to any other delimiter, change the ',' inside REPLACE to your desired one
             SELECT CAST ('<M>' + REPLACE(Sec_Kwd, ',', '</M><M>') + '</M>' AS XML) AS Data 
             FROM Kwd_UploadRecored     
        ) AS A 
        CROSS APPLY Data.nodes ('/M') AS Split(a)
    
        UNION ALL
    
        SELECT LTRIM(RTRIM(Split.a.value('.', 'VARCHAR(100)'))) 'KeyWords' 
        FROM  
        (
             -- To change ',' to any other delimiter, change the ',' inside REPLACE to your desired one
             SELECT CAST ('<M>' + REPLACE(Main_Kwd, ',', '</M><M>') + '</M>' AS XML) AS Data 
             FROM Kwd_UploadRecored     
        ) AS A 
        CROSS APPLY Data.nodes ('/M') AS Split(a)
    )
    SELECT T.English_Keywords, T.German_Keywords
    FROM CTE C
    JOIN Englishgermankwds_tbl T ON C.KeyWords=T.English_Keywords
    
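
    As a minimal, self-contained illustration of the XML-based splitter this query relies on: each value is wrapped in <M> tags so that nodes('/M') can shred the list into rows ('a,b,c' is just sample data):

    ```sql
    SELECT Split.a.value('.', 'VARCHAR(100)') AS Item
    FROM (
        SELECT CAST('<M>' + REPLACE('a,b,c', ',', '</M><M>') + '</M>' AS XML) AS Data
    ) AS A
    CROSS APPLY Data.nodes('/M') AS Split(a)
    -- returns three rows: a, b, c
    ```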

    UPDATE

    Here is the query that produces your expected output.

    ;WITH CTE AS
    (
        -- Since the CSV values are not in alphabetical order, we use ROW_NUMBER()
        -- to preserve the original order
        SELECT *,
        ROW_NUMBER() OVER(PARTITION BY ID ORDER BY (SELECT(0))) RNO,'Primary_Kwd' Colum 
        FROM
        (
            -- Convert CSV to rows
            SELECT ID,LTRIM(RTRIM(Split.a.value('.', 'VARCHAR(100)'))) 'KeyWords' 
            FROM  
            (
                 -- To change ',' to any other delimiter, change the ',' inside REPLACE to your desired one
                 SELECT ID,CAST ('<M>' + REPLACE(Primary_Kwd, ',', '</M><M>') + '</M>' AS XML) AS Data 
                 FROM #Kwd_UploadRecored     
            ) AS A 
            CROSS APPLY Data.nodes ('/M') AS Split(a)
        )TAB
    
        UNION ALL
    
        SELECT *,
        ROW_NUMBER() OVER(PARTITION BY ID ORDER BY (SELECT(0))) RNO,'Sec_Kwd'  
        FROM
        (
            SELECT ID,LTRIM(RTRIM(Split.a.value('.', 'VARCHAR(100)'))) 'KeyWords' 
            FROM  
            (
                 -- To change ',' to any other delimiter, change the ',' inside REPLACE to your desired one
                 SELECT ID,CAST ('<M>' + REPLACE(Sec_Kwd, ',', '</M><M>') + '</M>' AS XML) AS Data 
                 FROM #Kwd_UploadRecored     
            ) AS A 
            CROSS APPLY Data.nodes ('/M') AS Split(a)
        )TAB
    
        UNION ALL
    
        SELECT *,
        ROW_NUMBER() OVER(PARTITION BY ID ORDER BY (SELECT(0))) RNO,'Main_Kwd'  
        FROM
        (
            SELECT ID,LTRIM(RTRIM(Split.a.value('.', 'VARCHAR(100)'))) 'KeyWords' 
            FROM  
            (
                 -- To change ',' to any other delimiter, change the ',' inside REPLACE to your desired one
                 SELECT ID,CAST ('<M>' + REPLACE(Main_Kwd, ',', '</M><M>') + '</M>' AS XML) AS Data 
                 FROM #Kwd_UploadRecored     
            ) AS A 
            CROSS APPLY Data.nodes ('/M') AS Split(a)
        )TAB
    )
    ,CTE2 AS
    (
        -- Check for German word, if matched German word else English
        SELECT C.ID,C.RNO,C.Colum,ISNULL(T.German_Keywords,C.KeyWords) German_Keywords 
        FROM CTE C
        LEFT JOIN #Englishgermankwds_tbl T ON C.KeyWords=T.English_Keywords 
    ) 
    ,CTE3 AS
    (
        -- Convert back to CSV values with the old order of strings
        SELECT  ID,COLUM,
        SUBSTRING(
                (SELECT  ', ' + German_Keywords
                FROM CTE2 
                WHERE C2.Id=Id AND C2.COLUM=COLUM 
                ORDER BY RNO
                FOR XML PATH('')),2,200000) German_Keywords
        FROM CTE2 C2
    )
    -- Now we convert back Primary_Kwd,Sec_Kwd,Main_Kwd to columns with CSV values
    SELECT ID,
    MIN(CASE Colum WHEN 'Primary_Kwd' THEN German_Keywords END) Primary_Kwd,
    MIN(CASE Colum WHEN 'Sec_Kwd' THEN German_Keywords END) Sec_Kwd,
    MIN(CASE Colum WHEN 'Main_Kwd' THEN German_Keywords END) Main_Kwd 
    FROM CTE3
    GROUP BY ID
    

    UPDATE 2

    After the closing bracket of CTE3, use the code below:

    UPDATE Kwd_UploadRecored 
    SET Primary_Kwd = TAB.Primary_Kwd,
    Sec_Kwd = TAB.Sec_Kwd,
    Main_Kwd = TAB.Main_Kwd
    FROM
    (
        SELECT ID,
        MIN(CASE Colum WHEN 'Primary_Kwd' THEN German_Keywords END) Primary_Kwd,
        MIN(CASE Colum WHEN 'Sec_Kwd' THEN German_Keywords END) Sec_Kwd,
        MIN(CASE Colum WHEN 'Main_Kwd' THEN German_Keywords END) Main_Kwd 
        FROM CTE3
        GROUP BY ID
    )TAB
    WHERE Kwd_UploadRecored.ID=TAB.ID
    
    qid & accept id: (28068971, 28069381) query: SQL Pivot Table dynamic soup:

    Here you will select the values in a column to show as column in pivot

    \n
    DECLARE @cols NVARCHAR (MAX)\n\nSELECT @cols = COALESCE (@cols + ',[' + AvJT + ']', '[' + AvJT + ']')\n               FROM    (SELECT DISTINCT AvJT FROM YourTable) PV  \n               ORDER BY AvJT\n
    \n

    Now pivot the query

    \n
    DECLARE @query NVARCHAR(MAX)\nSET @query = 'SELECT * FROM \n             (\n                 SELECT date_1, StartHour,AvJT, data_source \n                 FROM YourTable\n             ) x\n             PIVOT \n             (\n                 -- Values in each dynamic column\n                 SUM(data_source)\n                 FOR AvJT IN (' + @cols + ')                      \n            ) p;' \n\nEXEC SP_EXECUTESQL @query\n
    \n\n

    If you want to do it to where column names are not dynamic, you can do the below query

    \n
    SELECT DATE_1,STARTHOUR,\nMIN(CASE WHEN AvJT='00001a' THEN data_source END) [00001a],\nMIN(CASE WHEN AvJT='00002a' THEN data_source END) [00002a],\nMIN(CASE WHEN AvJT='00003a' THEN data_source END) [00003a],\nMIN(CASE WHEN AvJT='00004a' THEN data_source END) [00004a]\nFROM YOURTABLE\nGROUP BY  DATE_1,STARTHOUR\n
    \n\n

    EDIT :

    \n

    I am updating for your updated question.

    \n

    Declare a variable for filtering data_source

    \n
    DECLARE @DATASOURCE VARCHAR(20) = '1' \n
    \n

    Instead of QUOTENAME, you can use another format to get the columns for pivot

    \n
    DECLARE @cols NVARCHAR (MAX)\n\nSELECT @cols = COALESCE (@cols + ',[' + Link_ID + ']', '[' + Link_ID + ']')\n               FROM    (SELECT DISTINCT Link_ID FROM C1_May_Routes WHERE data_source=@DATASOURCE) PV  \n               ORDER BY Link_ID\n
    \n

    Now pivot

    \n
    DECLARE @query NVARCHAR(MAX)\nSET @query = 'SELECT * FROM \n             (\n                 -- We will select the data that has to be shown for pivoting\n                 -- with filtered data_source\n                 SELECT date_1, StartHour,AvJT, Link_ID\n                 FROM C1_May_Routes\n                 WHERE data_source = '+@DATASOURCE+'\n             ) x\n             PIVOT \n             (\n                 -- Values in each dynamic column\n                 SUM(AvJT)\n                 -- Select columns from @cols \n                 FOR Link_ID IN (' + @cols + ')                      \n            ) p;' \n\nEXEC SP_EXECUTESQL @query\n
    \n\n soup wrap:

    Here you select the distinct values of a column to use as the pivot columns:

    DECLARE @cols NVARCHAR (MAX)
    
    SELECT @cols = COALESCE (@cols + ',[' + AvJT + ']', '[' + AvJT + ']')
                   FROM    (SELECT DISTINCT AvJT FROM YourTable) PV  
                   ORDER BY AvJT
    

    Now pivot the query

    DECLARE @query NVARCHAR(MAX)
    SET @query = 'SELECT * FROM 
                 (
                     SELECT date_1, StartHour,AvJT, data_source 
                     FROM YourTable
                 ) x
                 PIVOT 
                 (
                     -- Values in each dynamic column
                     SUM(data_source)
                     FOR AvJT IN (' + @cols + ')                      
                ) p;' 
    
    EXEC SP_EXECUTESQL @query
    

    If the column names are not dynamic, you can use the query below:

    SELECT DATE_1,STARTHOUR,
    MIN(CASE WHEN AvJT='00001a' THEN data_source END) [00001a],
    MIN(CASE WHEN AvJT='00002a' THEN data_source END) [00002a],
    MIN(CASE WHEN AvJT='00003a' THEN data_source END) [00003a],
    MIN(CASE WHEN AvJT='00004a' THEN data_source END) [00004a]
    FROM YOURTABLE
    GROUP BY  DATE_1,STARTHOUR
    
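
    For comparison, the hard-coded CASE version above can also be written as a static PIVOT along the same lines (the column values mirror the sample ones and are assumptions):

    ```sql
    SELECT date_1, StartHour, [00001a], [00002a], [00003a], [00004a]
    FROM
    (
        SELECT date_1, StartHour, AvJT, data_source
        FROM YourTable
    ) x
    PIVOT
    (
        SUM(data_source)
        FOR AvJT IN ([00001a], [00002a], [00003a], [00004a])
    ) p;
    ```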

    EDIT :

    I am updating my answer for your updated question.

    Declare a variable for filtering data_source

    DECLARE @DATASOURCE VARCHAR(20) = '1' 
    

    Instead of QUOTENAME, you can use another format to get the columns for pivot

    DECLARE @cols NVARCHAR (MAX)
    
    SELECT @cols = COALESCE (@cols + ',[' + Link_ID + ']', '[' + Link_ID + ']')
                   FROM    (SELECT DISTINCT Link_ID FROM C1_May_Routes WHERE data_source=@DATASOURCE) PV  
                   ORDER BY Link_ID
    

    Now pivot

    DECLARE @query NVARCHAR(MAX)
    SET @query = 'SELECT * FROM 
                 (
                     -- We will select the data that has to be shown for pivoting
                     -- with filtered data_source
                     SELECT date_1, StartHour,AvJT, Link_ID
                     FROM C1_May_Routes
                     WHERE data_source = '+@DATASOURCE+'
                 ) x
                 PIVOT 
                 (
                     -- Values in each dynamic column
                     SUM(AvJT)
                     -- Select columns from @cols 
                     FOR Link_ID IN (' + @cols + ')                      
                ) p;' 
    
    EXEC SP_EXECUTESQL @query
    
    qid & accept id: (28082687, 28083220) query: Search through meta array in database field soup:

    If the data is really stored in the way you show us, then this can be achieved using a hstore because that value can directly be cast to a hstore:

    \n
    select *\nfrom the_table\nwhere extract(month from ((meta::hstore -> 'date_approved')::timestamp)) = 1\n
    \n

    This will fail if the format in the column isn't exactly as you have shown us or if the timestamps are formatted in a different way.

    \n

    You might need to create the hstore extension to be able to use that:

    \n
    create extension hstore;\n
    \n

    This needs to be done as the superuser.

    \n

    SQLFiddle: http://sqlfiddle.com/#!15/d41d8/4408

    \n soup wrap:

    If the data is really stored in the way you show us, then this can be achieved using hstore, because that value can be cast directly to hstore:

    select *
    from the_table
    where extract(month from ((meta::hstore -> 'date_approved')::timestamp)) = 1
    

    This will fail if the format in the column isn't exactly as you have shown us or if the timestamps are formatted in a different way.

    You might need to create the hstore extension to be able to use that:

    create extension hstore;
    
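
    To see the cast in isolation (the key/value pair below is a made-up sample in the same format):

    ```sql
    select ('"date_approved"=>"2015-01-15 10:30:00"'::hstore -> 'date_approved')::timestamp;
    ```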

    This needs to be done as the superuser.

    SQLFiddle: http://sqlfiddle.com/#!15/d41d8/4408

    qid & accept id: (28109037, 28115702) query: Passing multiple values in single parameter soup:

    Your function wouldn't be created. RETURN after end is syntactical nonsense.

    \n

    Either way, a function with a VARIADIC parameter does exactly what you ask for:

    \n
    CREATE OR REPLACE FUNCTION test_function(VARIADIC varchar[])\n RETURNS SETOF integer AS\n$func$\nSELECT column2\nFROM   test_table\nWHERE  column1 = ANY($1);\n$func$  LANGUAGE sql;\n
    \n

    Call (as desired):

    \n
    SELECT * FROM test_function('data1', 'data2', 'data3');\n
    \n

    Using a simple SQL function, plpgsql is not required for the simple example. But VARIADIC works for plpgsql functions, too.

    \n

    Using RETURNS SETOF integer since this can obviously return multiple rows.

    \n

    Details:

    \n\n

    SQL Fiddle demo with additional parameters.

    \n soup wrap:

    Your function wouldn't even be created: RETURN after END is syntactically invalid.

    Either way, a function with a VARIADIC parameter does exactly what you ask for:

    CREATE OR REPLACE FUNCTION test_function(VARIADIC varchar[])
     RETURNS SETOF integer AS
    $func$
    SELECT column2
    FROM   test_table
    WHERE  column1 = ANY($1);
    $func$  LANGUAGE sql;
    

    Call (as desired):

    SELECT * FROM test_function('data1', 'data2', 'data3');
    

    Using a simple SQL function, plpgsql is not required for the simple example. But VARIADIC works for plpgsql functions, too.

    Using RETURNS SETOF integer since this can obviously return multiple rows.
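
    If you already have the values in an array, you can also pass the array itself with the VARIADIC keyword instead of listing the elements (a sketch against the same test_function):

    ```sql
    SELECT * FROM test_function(VARIADIC ARRAY['data1', 'data2', 'data3']);
    ```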

    SQL Fiddle demo with additional parameters.

    qid & accept id: (28129007, 28129059) query: Sql function - disregarding the decimal place in returning value soup:


    The problem is the declaration of @ret.

    In SQL Server, the default scale for a decimal is 0 (see the documentation). So,

    declare @ret decimal;
    

    is equivalent to:

    declare @ret decimal(18, 0);
    

    Just be explicit about the precision and scale, something like:

    declare @ret decimal(18, 2);
    
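    A quick way to see the effect of the default scale (a sketch; the variable names are illustrative):

    ```sql
    -- @a defaults to decimal(18, 0); @b keeps two decimal places
    declare @a decimal, @b decimal(18, 2);
    select @a = 10.0 / 3, @b = 10.0 / 3;
    select @a as default_scale,  -- 3
           @b as two_places;     -- 3.33
    ```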
    qid & accept id: (28147528, 28150079) query: Find MIN value based on two columns in MySQL soup:


    Maybe I am misunderstanding something, but you can put two or more columns in the GROUP BY clause, as in the following example; just separate them with a comma.

    SELECT 
    * 
    FROM
      performs NATURAL 
      JOIN athletes 
      JOIN 
        (SELECT 
          athlete_id,
          MIN(perform) AS perform,
          category_id,
          discipline_id 
        FROM
          zaznamy 
        WHERE discipline_id = 4 
        GROUP BY athlete_id,category_id) rec 
        ON performs.athlete_id = rec.athlete_id 
        AND performs.perform = rec.perform 
        AND performs.category_id = rec.category_id 
        AND performs.discipline_id = rec.discipline_id 
    ORDER BY performs.perform 
    LIMIT 25
    

    EDIT - UPDATE

    Ok, now I understand your problem. This is actually quite common: you first want to find the extreme value (here, the minimum) of each group, and then request additional info about that row.

    One way to achieve this is with a nested query, as follows:

    SELECT ss.athlete_id,ss.perform,category_id
    FROM performs ss
    inner join
    (SELECT
        athlete_id, MIN(perform) AS perform
    FROM 
        performs
    WHERE
        discipline_id = 4 AND category_id IN (1,3,5,7,9) 
    GROUP BY
        athlete_id) tt
        on tt.athlete_id = ss.athlete_id and ss.perform = tt.perform
    

    The result is the one you described above.

    qid & accept id: (28152970, 28153559) query: Oracle - Format number with fullstop for thousand and comma for decimals soup:


    You can use the FM format modifier to have trailing decimal zeros blanked out:

    select to_char(1, 'FM9G999G999D999', 'NLS_NUMERIC_CHARACTERS='',.''') from dual;
    
    TO_CHAR(1,'FM9G999G999D999','NLS_NUMERIC_CHARACTERS='',.''')
    ------------------------------------------------------------
    1,      
    

    But as you can see that leaves the decimal character behind; you can trim that off though:

    with t as (
     select 3.69 as n from dual
     union all select 1000 from dual
     union all select 150.20 from dual
     union all select 1 from dual
     union all select 0.16 from dual
    )
    select n,
      to_char(n, '9G999G999D000') original,
      to_char(n, 'FM9G999G999D999', 'NLS_NUMERIC_CHARACTERS='',.''') new,
      rtrim(to_char(n, 'FM9G999G999D999', 'NLS_NUMERIC_CHARACTERS='',.'''),
        ',') as trimmed
    from t;
    
             N ORIGINAL       NEW            TRIMMED       
    ---------- -------------- -------------- --------------
          3.69          3.690 3,69           3,69           
          1000      1,000.000 1.000,         1.000          
         150.2        150.200 150,2          150,2          
             1          1.000 1,             1            
           .16           .160 ,16            ,16            
    

    I'm using the optional third NLS argument to the to_char() function to set the G and D characters independently from my session settings.
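
    If you prefer, you can set those characters once at session level instead of passing them on every call (a sketch; after this the third argument to to_char() can be dropped):

    ```sql
    alter session set NLS_NUMERIC_CHARACTERS = ',.';

    select rtrim(to_char(1000, 'FM9G999G990D999'), ',') from dual;
    ```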

    If you want to preserve the zero before the decimal separator, just make the last 9 before the D into a 0:

    with t as (
     select 3.69 as n from dual
     union all select 1000 from dual
     union all select 150.20 from dual
     union all select 1 from dual
     union all select 0.16 from dual
    )
    select n,
      to_char(n, '9G99G990D000') original,
      to_char(n, 'FM9G999G990D999', 'NLS_NUMERIC_CHARACTERS='',.''') new,
      rtrim(to_char(n, 'FM9G999G990D999', 'NLS_NUMERIC_CHARACTERS='',.'''),
        ',') as trimmed
    from t;
    
             N ORIGINAL      NEW            TRIMMED       
    ---------- ------------- -------------- --------------
          3.69         3.690 3,69           3,69           
          1000     1,000.000 1.000,         1.000          
         150.2       150.200 150,2          150,2          
             1         1.000 1,             1              
           .16         0.160 0,16           0,16           
    
    qid & accept id: (28190592, 28192460) query: Treat NaN's as NULL in SSIS package soup:


    If you are going the package route, use a Derived Column transformation in the data flow and apply a replace expression:

            replace('NAN',column,null)
    

    Or, if you want to change the data in the database instead, you can run an UPDATE statement in an "OLE DB Command" transformation:

    Update table
    set column_name=null
    where column_name='NAN'
    
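    Inside a Derived Column, a conditional expression may be cleaner than REPLACE, since REPLACE cannot produce a NULL. This sketch uses the SSIS expression language; the column name and length are placeholders:

    ```
    [column] == "NaN" ? NULL(DT_WSTR, 50) : [column]
    ```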
    qid & accept id: (28219230, 28219390) query: PHP / SQL - Show specific number of rows per query soup:


    I found the answer to this question from gnarly's suggestion of LIMIT on SQL

    $sql = "SELECT * FROM my_table LIMIT X OFFSET Y";
    

    LIMIT gives only the X rows you want, and OFFSET Y sets the starting point by skipping the first Y rows. So, showing rows 1 through 30:

    $sql = "SELECT * FROM my_table LIMIT 30 OFFSET 0";
    

    And showing rows 31 through 60:

    $sql = "SELECT * FROM my_table LIMIT 30 OFFSET 30";
    
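    MySQL also accepts the shorthand LIMIT offset, count, so the last query could equivalently be written as (assuming MySQL):

    ```sql
    SELECT * FROM my_table LIMIT 30, 30;
    ```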
    qid & accept id: (28243845, 28244390) query: ORDER BY alternative values from main- and subquery soup:


    It should work like this:

    SELECT id 
    FROM   post_table p
    WHERE  post_user_id = $user_id  -- this is your input parameter
    ORDER  BY GREATEST(
       (
       SELECT max(comment_created_date) 
       FROM   comments_table
       WHERE  comments_post_id = p.id
       )
     , post_created_date) DESC NULLS LAST;

    You will want to add NULLS LAST if date columns can be NULL.

    If comments can only be later than posts (would make sense), you can use COALESCE instead of GREATEST.
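
    That COALESCE variant might look like this (a sketch using the same tables; it orders by the latest comment date when one exists, else by the post date):

    ```sql
    SELECT id
    FROM   post_table p
    WHERE  post_user_id = $user_id
    ORDER  BY COALESCE(
       (SELECT max(comment_created_date)
        FROM   comments_table
        WHERE  comments_post_id = p.id)
     , post_created_date) DESC NULLS LAST;
    ```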

    Cleaner alternative (may or may not be faster, depending on data distribution):

    SELECT id 
    FROM   post_table p
    LEFT   JOIN  (
       SELECT comments_post_id AS id, max(comment_created_date) AS max_date
       FROM   comments_table
       GROUP  BY 1
       ) c USING (id)
    WHERE  post_user_id = $user_id
    ORDER  BY GREATEST(c.max_date, p.post_created_date) DESC NULLS LAST;
    

    Since you have pg 9.3 you can also use a LATERAL join. Probably faster:

    SELECT id 
    FROM   post_table p
    LEFT   JOIN  LATERAL (
       SELECT max(comment_created_date) AS max_date
       FROM   comments_table
       WHERE  comments_post_id = p.id
       GROUP  BY comments_post_id
       ) c ON TRUE
    WHERE  post_user_id = $user_id
    ORDER  BY GREATEST(c.max_date, p.post_created_date) DESC NULLS LAST;
    
    qid & accept id: (28252370, 28255273) query: Selecting hierarchical data using MySQL variable soup:


    You've missed the need to order your data. Try the following: SQL Fiddle

    select t.nodeid, @pv := t.parentid parentid
    from (select * from table1 order by nodeid desc) t
    join (select @pv := 8) tmp
    where t.nodeid = @pv
    

    Output:

    | NODEID | PARENTID |
    |--------|----------|
    |      8 |        6 |
    |      6 |        5 |
    |      5 |        3 |
    |      3 |        0 |
    
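    Note that this technique depends on the order in which MySQL happens to evaluate the user variable. On MySQL 8.0 or later, a recursive CTE expresses the same walk without that fragility (a sketch against the same table1):

    ```sql
    WITH RECURSIVE chain AS (
        -- start at the requested node
        SELECT nodeid, parentid FROM table1 WHERE nodeid = 8
        UNION ALL
        -- follow each row's parent link upward
        SELECT t.nodeid, t.parentid
        FROM table1 t
        JOIN chain c ON t.nodeid = c.parentid
    )
    SELECT * FROM chain;
    ```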
    qid & accept id: (28263926, 28263968) query: If satetment: when date field is 1 month or older soup:


    One way is using a case statement:

    UPDATE transactions
        SET transactions = (case when tran_date >= date_sub(CURDATE(), interval 1 month)
                                 then transactions + 1 else 0
                            end),
            -- assign date_created before tran_date: MySQL applies SET assignments
            -- left to right, so anything after tran_date would see its new value
            date_created = (case when tran_date >= date_sub(CURDATE(), interval 1 month)
                                 then date_created
                                 else CURDATE()
                             end),
            tran_date = (case when tran_date >= date_sub(CURDATE(), interval 1 month)
                              then CURDATE()
                              else '0000-00-00'
                         end)
        WHERE name = 'jim';
    

    An alternative is to do this with two separate updates:

    UPDATE transactions
        SET transactions = transactions + 1 
        WHERE name = 'jim' and tran_date >= date_sub(CURDATE(), interval 1 month);
    
    UPDATE transactions
        SET transactions = 0,
            tran_date = '0000-00-00',
            date_created = CURDATE()
        WHERE name = 'jim' and tran_date < date_sub(CURDATE(), interval 1 month)
    

    I think the logic might be a bit clearer, but there is more overhead for two update statements.

    qid & accept id: (28277247, 28277271) query: Find different values in one column according to same value in second column soup:


    You can do this with a group by and having clause:

    select colb
    from table t
    group by colb
    having min(cola) <> max(cola);
    

    This returns all the values in colb that have more than one value in cola. You could also use:

    having count(distinct cola) > 1
    

    This works, but count(distinct) is less efficient than min() and max().
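
    A tiny worked example makes the logic concrete (hypothetical table and data):

    ```sql
    create table t (cola int, colb varchar(10));
    insert into t values (1, 'x'), (2, 'x'), (3, 'y');

    select colb
    from t
    group by colb
    having min(cola) <> max(cola);
    -- only 'x' qualifies: its group has min(cola) = 1 and max(cola) = 2,
    -- while group 'y' has min = max = 3
    ```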

    qid & accept id: (28278115, 28278845) query: Populate Date Column in Access with dates from 1980-01-01 to today with sql query soup:


    You could add those dates with Access SQL if you have a suitable table of numbers.

    INSERT INTO dates ([Date])
    SELECT CDate(n.the_number)
    FROM tblNumbers AS n
    WHERE n.the_number BETWEEN 29221 AND 42037;
    

    Or start from 1 in tblNumbers and add an offset ...

    INSERT INTO dates ([Date])
    SELECT CDate(n.the_number + 29220)
    FROM tblNumbers AS n
    WHERE n.the_number BETWEEN 1 AND 12817;
    

    But if you don't have a suitable numbers table, you would need to create one and populate it. And that is similar to the problem you started with, only loading plain numbers instead of dates.

    The original task is simple enough that I would use a throwaway VBA procedure instead of Access SQL. This one took less than 2 seconds to load my dates table with the required 12,817 date values:

    Dim db As DAO.Database
    Dim rs As DAO.Recordset
    Dim dte As Date
    Set db = CurrentDb
    Set rs = db.OpenRecordset("dates", dbOpenTable, dbAppendOnly)
    With rs
        For dte = #1/1/1980# To Date
            .AddNew
            ![Date].Value = dte
            .Update
        Next
        .Close
    End With
    
    qid & accept id: (28285834, 28285986) query: How to work out Percent in PostgreSQL using Count soup:


    Postgres does integer division. An easy way to fix this is by multiplying by 1.0 or casting the value to a float:

    select ((select 1.0 * count(*) from k12_read where quality_score >= 25) / 
            (select count(*) from k12_read) * 100.0
           ) as percentage;
    

    By the way, I think your query is simpler if you use conditional aggregation with avg():

    select avg(case when quality_score >= 25 then 100.0 else 0 end)
    from k12_read;
    
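    On PostgreSQL 9.4 or later, an aggregate FILTER clause is another compact option (a sketch against the same table):

    ```sql
    select 100.0 * count(*) filter (where quality_score >= 25) / count(*) as percentage
    from k12_read;
    ```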
    qid & accept id: (28308752, 28308804) query: Parsing a column value in SQL soup:


    You could use, for example, the DelimitedSplit8K function posted here (quite far toward the end):

    http://www.sqlservercentral.com/articles/Tally+Table/72993/

    And join that with your select with something like this:

    select
      t.ID,
      s.Item,
      t.TEXT
    from
      table t
      cross apply dbo.DelimitedSplit8K(t.VALUE, ',') s 
    

    Edit, Including the function code:

    CREATE FUNCTION [dbo].[DelimitedSplit8K]
    --===== Define I/O parameters
            (@pString VARCHAR(8000), @pDelimiter CHAR(1))
    --WARNING!!! DO NOT USE MAX DATA-TYPES HERE!  IT WILL KILL PERFORMANCE!
    RETURNS TABLE WITH SCHEMABINDING AS
     RETURN
    --===== "Inline" CTE Driven "Tally Table" produces values from 1 up to 10,000...
         -- enough to cover VARCHAR(8000)
      WITH E1(N) AS (
                     SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
                     SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
                     SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
                    ),                          --10E+1 or 10 rows
           E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
           E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
     cteTally(N) AS (--==== This provides the "base" CTE and limits the number of rows right up front
                         -- for both a performance gain and prevention of accidental "overruns"
                     SELECT TOP (ISNULL(DATALENGTH(@pString),0)) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
                    ),
    cteStart(N1) AS (--==== This returns N+1 (starting position of each "element" just once for each delimiter)
                     SELECT 1 UNION ALL
                     SELECT t.N+1 FROM cteTally t 
                     WHERE SUBSTRING(@pString,t.N,1) = @pDelimiter
                    ),
    cteLen(N1,L1) AS(--==== Return start and length (for use in substring)
                     SELECT s.N1,
                        ISNULL(NULLIF(CHARINDEX(@pDelimiter,@pString,s.N1),0)-s.N1,8000)
                       FROM cteStart s
                    )
    --===== Do the actual split. The ISNULL/NULLIF combo handles the length for the final element when no delimiter is found.
     SELECT ItemNumber = ROW_NUMBER() OVER(ORDER BY l.N1),
            Item       = SUBSTRING(@pString, l.N1, l.L1)
       FROM cteLen l
    ;
    
    qid & accept id: (28339885, 28340000) query: SQL join using UNION ALL with some columns common and some outer soup:


    Use a full outer join, like so:

    select *
    from table1 t1 
    full outer join table2 t2
    on t1.c4 = t2.c1 and t1.c5 = t2.c2
    

    While SQL Server supports full outer joins, MySQL does not. This query can be rewritten in that situation as follows:

    select *
    from table1 t1 
    left outer join table2 t2
    on t1.c4 = t2.c1 and t1.c5 = t2.c2
    union
    select *
    from table1 t1 
    right outer join table2 t2
    on t1.c4 = t2.c1 and t1.c5 = t2.c2
    

    Based on your updated requirements, the form of this join specified above can be used with slight modifications like so:

    select null,null,null,t.* from table1 s
    right outer join table2  t on s.c4 = t.c1  and s.c5 = t.c2
    union
    select s.*,null,null from table1 s
    left outer join table2  t on s.c4 = t.c1  and s.c5 = t.c2
    

    Note that you will still need to include the literal value null in your select clause, once for each column that needs to be defaulted to null.

    Demo

    qid & accept id: (28371337, 28371601) query: How to select none if condition is not? soup:


    The WHERE condition filters the rows first; only then is MAX() applied to those results. Use HAVING to operate on the results after aggregating.

    SELECT id, account, MAX(mydate) AS maxdate
    FROM awesometable
    GROUP BY account
    HAVING maxdate < DATE_SUB(NOW(), INTERVAL 12 MONTH)
    

    Note that this will not necessarily show the id of the line with the maximum date. For that, you need a join:

    SELECT a.id, a.account, a.mydate
    FROM awesometable AS a
    JOIN (
        SELECT account, MAX(mydate) AS maxdate
        FROM awesometable
        GROUP BY account
        HAVING maxdate < DATE_SUB(NOW(), INTERVAL 12 MONTH)) AS b
    ON a.account = b.account AND a.mydate = b.maxdate
    
    qid & accept id: (28397342, 28397484) query: Update column according to another column soup:


    Just tried it with SQL Fiddle:

    create table tbl_SO_19
    (
    col1 int,
    col2 varchar(50),
    col3 bit,
    col4 int
    )
    go
    insert into tbl_SO_19
    values
    (1,'John',0,null),
    (2,'Hony',0,null),
    (3,'John',1,null),
    (4,'Rohn',0,null),
    (5,'Hony',1,null)
    

    Now you can use the query below to update it the way you wanted:

    Update tbl_SO_19
    set col4 = t.col1
    from tbl_SO_19 join tbl_SO_19 t on t.col2=tbl_SO_19.col2 and t.col3=1
    where tbl_SO_19.col3 = 0
    
    qid & accept id: (28474871, 28474949) query: Select multiple records grouped by primary key with max value on a column soup:


    Isn't this just a plain group by?

    SELECT a, b, c, d, MAX(issue)
    FROM tablename
    GROUP BY a, b, c, d
    

    If contents also is required:

    SELECT a, b, c, d, issue, contents
    FROM tablename t1
    WHERE issue = (select max(issue) from tablename t2
                   where t1.a = t2.a
                     and t1.b = t2.b
                     and t1.c = t2.c
                     and t1.d = t2.d)
    

    Will list both rows if it's a tie!
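The correlated-subquery version can be exercised end to end; here is a small sketch in Python's sqlite3 (column names as above, sample data invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tablename (a, b, c, d, issue INT, contents TEXT);
INSERT INTO tablename VALUES
 ('x','y','z','w', 1, 'first'),
 ('x','y','z','w', 3, 'third'),
 ('q','r','s','t', 2, 'only');
""")

# Keep each row whose issue equals the maximum within its (a,b,c,d) group.
rows = conn.execute("""
SELECT a, b, c, d, issue, contents
FROM tablename t1
WHERE issue = (SELECT MAX(issue) FROM tablename t2
               WHERE t1.a = t2.a AND t1.b = t2.b
                 AND t1.c = t2.c AND t1.d = t2.d)
ORDER BY a
""").fetchall()
print(rows)
```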

    qid & accept id: (28481148, 28482551) query: Set based solution to generate batch number based on proximity and type of record in SQL server soup:

    soup wrap:

    You could do this using a recursive CTE. I also used the LEAD function to look at the next row and determine whether the trancode changed.

    Query:

    WITH A
    AS (
        SELECT id
            ,trancode
            ,trandate
            ,lead(trancode) OVER (ORDER BY id,trancode) leadcode
        FROM #t
        )
        ,cte
    AS (
        SELECT id
            ,trandate
            ,trancode
            ,lead(trancode) OVER (ORDER BY id,trancode) leadcode
            ,1 batchnum
            ,1 nextbatchnum
            ,id + 1 nxtId
        FROM #t
        WHERE id = 1
    
        UNION ALL
    
        SELECT A.id
            ,A.trandate
            ,A.trancode
            ,A.leadcode
            ,nextbatchnum
            ,CASE 
                WHEN A.trancode <> A.leadcode THEN nextbatchnum + 1 ELSE nextbatchnum END nextbatchnum
            ,A.id + 1 nxtid
        FROM A
        INNER JOIN CTE B ON A.id = B.nxtId
        )
    SELECT id
        ,trandate
        ,trancode
        ,batchnum
    FROM CTE
    OPTION (MAXRECURSION 100)
    

    Result:

    id  trandate    trancode    batchnum
    1   2015-02-12 10:19:06.717 1   1
    2   2015-02-12 10:20:06.717 1   1
    3   2015-02-12 10:21:06.717 1   1
    4   2015-02-12 10:22:06.717 1   1
    5   2015-02-12 10:23:06.717 2   2
    6   2015-02-12 10:24:06.717 2   2
    7   2015-02-12 10:25:06.717 2   2
    8   2015-02-12 10:26:06.717 2   2
    9   2015-02-12 10:27:06.717 2   2
    10  2015-02-12 10:28:06.717 1   3
    11  2015-02-12 10:29:06.717 1   3
    12  2015-02-12 10:30:06.717 1   3
    13  2015-02-12 10:31:06.717 2   4
    14  2015-02-12 10:32:06.717 2   4
    15  2015-02-12 10:33:06.717 1   5
    16  2015-02-12 10:34:06.717 1   5
    17  2015-02-12 10:35:06.717 1   5
    18  2015-02-12 10:36:06.717 2   6
    19  2015-02-12 10:37:06.717 2   6
    20  2015-02-12 10:38:06.717 1   7
    21  2015-02-12 10:39:06.717 1   7
    22  2015-02-12 10:40:06.717 1   7
    23  2015-02-12 10:40:06.717 1   7
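The batch rule itself ("increment whenever trancode differs from the previous row") is easy to state procedurally; a minimal Python sketch of the same numbering, handy for checking the CTE's output:

```python
def batch_numbers(trancodes):
    """Assign batch numbers that increment whenever the code changes
    between consecutive rows -- the same rule the recursive CTE encodes."""
    batches, current, previous = [], 0, object()  # sentinel equals nothing
    for code in trancodes:
        if code != previous:
            current += 1
            previous = code
        batches.append(current)
    return batches

# First rows of the result table above: four 1s, five 2s, three 1s...
print(batch_numbers([1, 1, 1, 1, 2, 2, 2, 2, 2, 1, 1, 1]))
# [1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3]
```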
    
    qid & accept id: (28483426, 28483682) query: Specify value to appear last in ordered results soup:

    soup wrap:

    You'll need to use some conditional logic in your ORDER BY. This will sort the data in the specific order that you want, Red always being last:

    SELECT id, colour
    FROM colours
    ORDER BY 
      CASE 
        WHEN colour <> 'Red' 
        THEN 1 ELSE 2 END, colour;
    

    See SQL Fiddle with Demo. This uses a CASE expression to assign a value to each row that is used for the ordering. Red is assigned a higher value, so it will appear at the end of the list.

    This could also be written testing for the Colour being equal to Red first:

    SELECT id, colour
    FROM colours
    ORDER BY 
      CASE 
        WHEN colour = 'Red' 
        THEN 2 ELSE 1 END, colour;
    

    See Demo. Both versions will return:

    | ID | COLOUR |
    |----|--------|
    |  1 |   Blue |
    |  4 |  Green |
    |  5 | Orange |
    |  6 |   Teal |
    |  3 | Yellow |
    |  2 |    Red |
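The same CASE-in-ORDER-BY trick runs unchanged in SQLite; a quick sketch in Python's sqlite3 with the table above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE colours (id INT, colour TEXT);
INSERT INTO colours VALUES
 (1,'Blue'),(2,'Red'),(3,'Yellow'),(4,'Green'),(5,'Orange'),(6,'Teal');
""")

# The CASE sorts every non-Red row into group 1 and Red into group 2,
# then alphabetically within each group.
rows = conn.execute("""
SELECT id, colour FROM colours
ORDER BY CASE WHEN colour <> 'Red' THEN 1 ELSE 2 END, colour
""").fetchall()
print([colour for _, colour in rows])
# ['Blue', 'Green', 'Orange', 'Teal', 'Yellow', 'Red']
```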
    
    qid & accept id: (28537722, 28538529) query: Multisearch form for a query in access soup:

    soup wrap:

    So if I gather correctly, you need to perform a multi-search where, if any of the boxes are null, all values are returned, and more than one text box can be used simultaneously. To do this, amend the query as follows.

    Amend the query field (note I'm referring to the field, not the criteria). For the first text box, assuming its name is COD and the field name is also COD: if the current field name is COD, insert another field with the same name and amend it to

      [COD]=[Forms]![frmRICmp]![cod] OR [Forms]![frmRICmp]![cod] Is NULL
    then in the criteria field use the following value
      TRUE
    

    For the second text box, assuming its name is COD2 and the field name is also COD2: if the current field name is COD2, insert another field with the same name and amend it to

    [COD2]=[Forms]![frmRICmp]![cod2] OR [Forms]![frmRICmp]![cod2] Is NULL
    then in the criteria field use the following value
      TRUE
    

    and continue the same process for all 4 text boxes.
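The same "[field] = [param] OR [param] Is Null" pattern works in any SQL dialect with bound parameters; a hedged sketch in Python's sqlite3 (table name and sample data invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (COD TEXT, COD2 TEXT);
INSERT INTO t VALUES ('a','x'),('b','y'),('a','y');
""")

def search(cod=None, cod2=None):
    # Each box filters only when it has a value; a NULL (None) parameter
    # makes its clause true for every row -- the Access trick above.
    return conn.execute("""
        SELECT COD, COD2 FROM t
        WHERE (COD  = :cod  OR :cod  IS NULL)
          AND (COD2 = :cod2 OR :cod2 IS NULL)
        ORDER BY COD, COD2
    """, {"cod": cod, "cod2": cod2}).fetchall()

print(len(search()))               # all 3 rows when both boxes are empty
print(search(cod="a"))             # [('a', 'x'), ('a', 'y')]
print(search(cod="a", cod2="y"))   # [('a', 'y')]
```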

    qid & accept id: (28544816, 28545284) query: How to remove constraint based on columns from oracle database? soup:

    soup wrap:

    I didn't think this was possible with a single statement, but it turns out it is, as shown in the examples in the documentation:

    ALTER TABLE MY_TABLE DROP UNIQUE(col1, col2);
    

    A complete example: ALTER TABLE MY_TABLE ADD UNIQUE (col1, col2);

    Table my_table altered.
    
    SELECT CONSTRAINT_NAME, INDEX_NAME
    FROM USER_CONSTRAINTS
    WHERE TABLE_NAME = 'MY_TABLE';
    
    CONSTRAINT_NAME                INDEX_NAME                    
    ------------------------------ ------------------------------
    SYS_C0092455                   SYS_C0092455                   
    
    ALTER TABLE MY_TABLE DROP UNIQUE(col1, col2);
    
    Table my_table altered.
    
    SELECT CONSTRAINT_NAME, INDEX_NAME
    FROM USER_CONSTRAINTS
    WHERE TABLE_NAME = 'MY_TABLE';
    
    no rows selected
    

    An alternative approach is to query the USER_CONSTRAINTS and USER_CONS_COLUMNS views to find the matching constraint name - presumably system-generated or you would already know it - and then use that name. If you need to do this as a script then you could query in a PL/SQL block, and plug the found constraint name into a dynamic ALTER TABLE statement.

    qid & accept id: (28566000, 28566216) query: Convert a BINARY stored as VARCHAR to BINARY soup:

    soup wrap:

    The result you get is because the string "0003f80075177fe6" (a VARCHAR value) is converted to code points, and these code points are served up as a binary value. Since you're probably using an ASCII-compatible collation, that means you get the ASCII code points: 0 is 48 (30 hex), f is 102 (66 hex) and so on. This explains the 30 30 30 33 66 38 30 30...

    What you want to do instead is parse the string as a hexadecimal representation of the bytes (00 03 f8 00 75 17 7f e6). CONVERT accepts an extra "style" parameter that allows you to convert hexstrings:

    SELECT CONVERT(BINARY(16), '0003f80075177fe6', 2)
    

    Style 2 converts a hexstring to binary. (Style 1 does the same for strings that start with "0x", which is not the case here.)

    Note that if there are less than 16 bytes (as in this case), the value is right-padded with zeroes (0x0003F80075177FE60000000000000000). If you need it left-padded instead, you have to do that yourself:

    SELECT CONVERT(BINARY(16), RIGHT(REPLICATE('00', 16) + '0003f80075177fe6', 32), 2)
    

    Finally, note that binary literals can be specified without conversion simply by prefixing them with "0x" and not using quotes: SELECT 0x0003f80075177fe6 will return a column of type BINARY(8). Not relevant for this query, but just for completeness.
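Outside the database, the same parsing and padding is easy to check; a small Python sketch where bytes.fromhex plays the role of CONVERT with style 2:

```python
# Each pair of hex digits becomes one byte, just like CONVERT(..., 2).
raw = bytes.fromhex("0003f80075177fe6")

right_padded = raw.ljust(16, b"\x00")  # what BINARY(16) does by default
left_padded = raw.rjust(16, b"\x00")   # the RIGHT(REPLICATE('00',16)+...) variant

print(len(raw))            # 8
print(right_padded.hex())  # 0003f80075177fe60000000000000000
print(left_padded.hex())   # 00000000000000000003f80075177fe6
```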

    qid & accept id: (28611170, 28612097) query: loop select results as arguments for script soup:

    soup wrap:

    EDIT: Based on your comment I think you just need to use a for loop in your .sh script. First you'd assign the query results, then use the for loop to call the .pl script for each argument in your results variable.

    for ARG in $RESULTS
      do
         perl /path/to/script.pl $ARG
      done
    

    I'm not as familiar with bash scripting so the code might not be quite right but the approach should be sound.

    Original answer:

    If your query script and script.pl must be separate but are both Perl scripts, you can have the query script run script.pl for you using system(). Assuming your arguments are separated by /, you could do something like this in your query script:

    #!/usr/bin/perl
    
    #query code to get arguments
    $arguments =~ s/\// /g;  # note /g: replace every slash, not just the first
    
    system ("perl /path/to/script.pl $arguments") == 0 or die ("Something went wrong: $?\n");  # system() returns 0 on success
    

    Then in script.pl you can just loop through @ARGV (a standalone script receives its command-line arguments in @ARGV, not @_):

    #!/usr/bin/perl

    for my $arg (@ARGV)
    {
      #script.pl code
    }
    

    Here's the link to perldoc for using system. Although I'd only recommend this approach if everything is internal.

    qid & accept id: (28619830, 28619883) query: Querying and grouping by date (Y/m/j) when date is in a Y/m/j H:i:s format soup:

    soup wrap:

    If timestamp is a date/time column -- which it should be; you should not be storing date/times as strings -- then you can do:

    SELECT DATE(timestamp), COUNT(eventid)
    FROM `tablex`
    WHERE timestamp >= date_sub(CURRENT_DATE, interval 30 day)
    GROUP BY DATE(timestamp) 
    

    Note that this query includes the date in the select.

    If your timestamp is stored as a string, it is in a sort-of reasonable format. I would be inclined to translate it using a subquery and just use that.

    SELECT thedate, COUNT(eventid)
    FROM (select x.*, date(replace(left(timestamp, 10), '/', '-')) as thedate
          from `tablex` x
         ) x
    WHERE thedate >= date_sub(CURRENT_DATE, interval 30 day)
    GROUP BY thedate;
    

    Note that you can also use str_to_date() to convert the string to a date. I just find it easier in this case to use date() and replace().
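The string-to-date translation is easy to verify outside MySQL; a minimal Python sketch of the same LEFT/REPLACE/DATE pipeline (sample rows invented, assuming zero-padded day and month):

```python
from datetime import datetime, date

# Invented sample rows in the question's 'Y/m/j H:i:s'-style string format.
rows = [
    ("2015/02/18 09:15:00", "e1"),
    ("2015/02/18 17:40:00", "e2"),
    ("2015/02/19 08:05:00", "e3"),
]

counts = {}
for ts, _eventid in rows:
    # Same effect as DATE(REPLACE(LEFT(timestamp, 10), '/', '-')):
    # take the first 10 characters and parse them as a date.
    day = datetime.strptime(ts[:10], "%Y/%m/%d").date()
    counts[day] = counts.get(day, 0) + 1

print(counts)
```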

    qid & accept id: (28628081, 28628464) query: Check Constraint Referencing Unique Column on another Table soup:

    soup wrap:

    First, like GordonLinoff comments, the better approach is to include TitleID in the Customer table. Below is an option if you can't change the layout of the Customer table. A foreign key is definitely better than using dynamic T-SQL to keep a check constraint up to date.

    I cannot reference the title.label column with a foreign key as it is not a primary key.

    A foreign key can reference any candidate key. It doesn't have to reference the primary key.

    To tell the database about candidate keys, you can create a unique index:

    create table title (
        id int primary key, 
        label varchar(50));
    create table customer (
        id int primary key, 
        title varchar(50));
    create unique index ux_title_label on title(label);
    alter table customer add constraint fk_customer_title 
        foreign key (title) references title(label);
    

    Another way to tell the database about a candidate key is a unique constraint:

    alter table title add constraint uc_title_label unique (label);
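The same idea -- a foreign key targeting a unique non-PK column -- can be demonstrated in Python's sqlite3, since SQLite also accepts any uniquely-indexed column as an FK parent:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # FK enforcement is opt-in in SQLite
conn.executescript("""
CREATE TABLE title (id INTEGER PRIMARY KEY, label TEXT UNIQUE);
CREATE TABLE customer (id INTEGER PRIMARY KEY,
                       -- the FK targets label, a candidate key, not the PK
                       title TEXT REFERENCES title(label));
INSERT INTO title VALUES (1, 'Dr'), (2, 'Prof');
""")

conn.execute("INSERT INTO customer VALUES (1, 'Dr')")       # parent row exists
try:
    conn.execute("INSERT INTO customer VALUES (2, 'Rev')")  # no such label
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```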
    
    qid & accept id: (28663276, 28663759) query: Change column datatype in SELECT in SQL Server soup:

    soup wrap:

    1) Simplest solution would be a simple join thus:

    SELECT  c.name, c.categoryID, category.name AS category_name
    FROM    customer c
    INNER JOIN -- or LEFT JOIN if categoryID allows NULLs
    (
    SELECT 1, 'First category' UNION ALL
    SELECT 2, 'Second category' UNION ALL
    SELECT 3, 'Third category'
    ) category(categoryID, name) ON c.categoryID = category.categoryID
    

    I would use this solution if the list of categories is small, static, and needed only for this query.

    2) Otherwise, I would create a new table thus

    CREATE TABLE category -- or dbo.category (note: you should use the object's/table's schema)
    (
        categoryID INT NOT NULL,
            CONSTRAINT PK_category_categoryID PRIMARY KEY(categoryID),
        name NVARCHAR(50) NOT NULL -- you should use the proper type (varchar maybe) and max length (100 maybe)
        --,      CONSTRAINT IUN_category_name UNIQUE(name) -- uncomment this line if you want unique categories (no duplicate values in column [name])
    );
    GO
    

    plus I would create a foreign key in order to be sure that categories from [customer] table exist also in [category] table:

    ALTER TABLE customer 
    ADD CONSTRAINT FK_customer_categoryID 
    FOREIGN KEY (categoryID) REFERENCES category(categoryID)
    GO
    
    INSERT category (categoryID, name)
    SELECT 1, 'First category' UNION ALL
    SELECT 2, 'Second category' UNION ALL
    SELECT 3, 'Third category'
    GO
    

    and your query will be

    SELECT  c.name, c.categoryID, ctg.name AS category_name
    FROM    customer c
    INNER JOIN category ctg ON c.categoryID = ctg.categoryID -- or LEFT JOIN if c.categoryID allows NULLs
    

    I would use solution #2.
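A runnable sketch of solution #2's end state in Python's sqlite3 (table and column names as above; sample customers invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE category (categoryID INT PRIMARY KEY, name TEXT NOT NULL);
INSERT INTO category VALUES
 (1,'First category'),(2,'Second category'),(3,'Third category');
CREATE TABLE customer (name TEXT, categoryID INT REFERENCES category);
INSERT INTO customer VALUES ('Alice', 2), ('Bob', 1);
""")

# The join resolves each customer's categoryID into a readable name.
rows = conn.execute("""
SELECT c.name, c.categoryID, ctg.name AS category_name
FROM customer c
JOIN category ctg ON c.categoryID = ctg.categoryID
ORDER BY c.name
""").fetchall()
print(rows)  # [('Alice', 2, 'Second category'), ('Bob', 1, 'First category')]
```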

    qid & accept id: (28675581, 28675776) query: Multi column ORDER BY for wp_postmeta values soup:

    soup wrap:

    Simply include the columns you need sorted in the ORDER BY clause. It looks like you have three metadata columns that constitute year, month, and day. So try this:

    ORDER BY mt1.meta_value DESC, 
             mt2.meta_value DESC,
             mt3.meta_value DESC
    LIMIT 0, 10 
    

    Now look, meta_values are text. Your months and days (mt2, mt3) values might look like '1', '2' ... '10', '11' etc. In that case you have to trick MySQL into thinking your values are numbers, or your sorting will come up wonky. This is easy: add zero to the text value. This will typecast your text to integer. The TRIM() function gets rid of leading and trailing spaces.

    ORDER BY 0+TRIM(mt1.meta_value) DESC, 
             0+TRIM(mt2.meta_value) DESC,
             0+TRIM(mt3.meta_value) DESC
    LIMIT 0, 10 
    

    Or, as Marcus suggested, you could use a DATE object for ordering. You can make a date object out of your three metadata columns like this:

     STR_TO_DATE(CONCAT_WS('-',
                           TRIM(mt1.meta_value),
                           TRIM(mt2.meta_value),
                           TRIM(mt3.meta_value)),
                 '%Y-%m-%d')
    

    CONCAT_WS turns your three values into '2015-02-14'. Then, STR_TO_DATE(arg,'%Y-%m-%d') turns that string into a date.

    This is cool because you can then order by it, like so:

     ORDER BY STR_TO_DATE(CONCAT_WS('-',
                           TRIM(mt1.meta_value),
                           TRIM(mt2.meta_value),
                           TRIM(mt3.meta_value)),
                 '%Y-%m-%d') DESC
    

    You can also use it in WHERE clauses with date arithmetic, for example ....

     WHERE (thatBigDateExpression) >= NOW() - INTERVAL 2 MONTH
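The 0+TRIM trick is easy to demonstrate; this sqlite3 sketch contrasts plain text ordering with the coerced numeric ordering (sample values invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE meta (v TEXT);
INSERT INTO meta VALUES (' 2 '), ('10'), ('1 ');
""")

# Text ordering compares character by character, so '10' sorts above '2';
# adding 0 to the trimmed value coerces it to a number first.
as_text = [r[0] for r in conn.execute("SELECT v FROM meta ORDER BY v DESC")]
as_numbers = [r[0] for r in conn.execute(
    "SELECT v FROM meta ORDER BY 0 + TRIM(v) DESC")]
print(as_text)     # ['10', '1 ', ' 2 ']
print(as_numbers)  # ['10', ' 2 ', '1 ']
```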
    
    qid & accept id: (28679208, 28679510) query: Add multiple CHECK constraints on one column depending on the values of another column soup:

    soup wrap:

    You need to use a CASE expression, e.g. something like:

    create table test1 (col1 varchar2(2),
                        col2 number);
    
    alter table test1 add constraint test1_chk check (col2 < case when col1 = 'A' then 50
                                                                  when col1 = 'B' then 100
                                                                  when col1 = 'C' then 150
                                                                  else col2 + 1
                                                             end);
    
    insert into test1 values ('A', 49);
    insert into test1 values ('A', 50);
    insert into test1 values ('B', 99);
    insert into test1 values ('B', 100);
    insert into test1 values ('C', 149);
    insert into test1 values ('C', 150);
    insert into test1 values ('D', 5000);
    
    commit;
    

    Output:

    1 row created.
    
    insert into test1 values ('A', 50)
    Error at line 2
    ORA-02290: check constraint (MY_USER.TEST1_CHK) violated
    
    1 row created.
    
    insert into test1 values ('B', 100)
    Error at line 4
    ORA-02290: check constraint (MY_USER.TEST1_CHK) violated
    
    1 row created.
    
    insert into test1 values ('C', 150)
    Error at line 6
    ORA-02290: check constraint (MY_USER.TEST1_CHK) violated
    
    1 row created.
    
    Commit complete.
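SQLite accepts the same CASE-based CHECK, so the constraint can be exercised from Python (a sketch of the Oracle example above, not the original environment):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE test1 (
    col1 TEXT,
    col2 INT,
    -- per-col1 upper bound; the ELSE branch (col2 < col2 + 1) always passes
    CHECK (col2 < CASE col1 WHEN 'A' THEN 50
                            WHEN 'B' THEN 100
                            WHEN 'C' THEN 150
                            ELSE col2 + 1 END)
)
""")

conn.execute("INSERT INTO test1 VALUES ('A', 49)")      # within the limit
conn.execute("INSERT INTO test1 VALUES ('D', 5000)")    # no limit for 'D'
try:
    conn.execute("INSERT INTO test1 VALUES ('A', 50)")  # 50 < 50 is false
    violated = False
except sqlite3.IntegrityError:
    violated = True
print(violated)  # True
```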
    
    qid & accept id: (28698635, 28700559) query: Oracle query using Window function soup:

    soup wrap:

    You can use CASE with LEAD and LAG:

    SELECT   D.*,
             CASE
                WHEN LAG (D1) OVER (ORDER BY D1) IS NOT NULL
                     AND (LAG (D1) OVER (ORDER BY D1), LAG (D2) OVER (ORDER BY D1))
                           OVERLAPS (D1, D2)
                     OR LEAD (D1) OVER (ORDER BY D1) IS NOT NULL
                       AND (LEAD (D1) OVER (ORDER BY D1),
                            LEAD (D2) OVER (ORDER BY D1))
                             OVERLAPS (D1, D2)
                THEN
                   'S'
                ELSE
                   'N'
             END
                OVERLAP
      FROM   MYDATA D;
    

    Results:

    NAME                                               D1        D2        OVERLAP
    -------------------------------------------------- --------- --------- -------
    A                                                  01-JAN-10 02-MAR-10 N      
    B                                                  03-MAR-10 20-MAR-10 S      
    C                                                  10-MAR-10 20-SEP-10 S      
    D                                                  10-DEC-10 31-DEC-10 N  
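    As a runnable sketch of the same LAG/LEAD idea (SQLite via Python's sqlite3; SQLite has no OVERLAPS operator, so the closed-interval comparison is written out explicitly, and the dates are ISO re-encodings of the output above):

```python
import sqlite3

# Re-create MYDATA and flag rows whose (d1, d2) range touches the
# previous or next row's range, ordered by d1 -- same logic as the
# Oracle query, with OVERLAPS expanded into <=/>= comparisons.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE mydata (name TEXT, d1 TEXT, d2 TEXT);
INSERT INTO mydata VALUES
  ('A', '2010-01-01', '2010-03-02'),
  ('B', '2010-03-03', '2010-03-20'),
  ('C', '2010-03-10', '2010-09-20'),
  ('D', '2010-12-10', '2010-12-31');
""")

rows = con.execute("""
SELECT name,
       CASE
         WHEN (LAG(d1) OVER (ORDER BY d1) IS NOT NULL
               AND LAG(d1) OVER (ORDER BY d1) <= d2
               AND LAG(d2) OVER (ORDER BY d1) >= d1)
           OR (LEAD(d1) OVER (ORDER BY d1) IS NOT NULL
               AND LEAD(d1) OVER (ORDER BY d1) <= d2
               AND LEAD(d2) OVER (ORDER BY d1) >= d1)
         THEN 'S' ELSE 'N'
       END AS overlap
FROM mydata
ORDER BY d1
""").fetchall()
print(rows)  # [('A', 'N'), ('B', 'S'), ('C', 'S'), ('D', 'N')]
```

    This needs SQLite 3.25+ for window function support.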
    
    qid & accept id: (28749544, 28749827) query: Trying to get pricing from quantity breakdown soup:
    soup wrap:
    declare @prices table
    (
        id int identity(1,1),
        item int,
        Qty int,
        Price float
    )
    
    declare @orders table
    (
        id int identity(1000,1),
        item int,
        item_qty int
    )
    
    insert into @prices (item, Qty, Price)
    values 
    (525001,1, 59),
    (525001,8, 55),
    (525001,13, 45)
    
    insert into @orders (item, item_qty)
    values
    (525001,9),
    (525001,2),
    (525001,50000)
    

    select Id, max(Price) as retail_price, sum(Item_qty) as items_sold, count(IdOrder) as orders_count
     from
    (
        select 
            o.item_qty,
            o.Id as idOrder,
            p.*,
            ROW_NUMBER() over (partition by o.Item, o.Id order by p.Qty desc) as num
        from @orders o
        join @prices p on p.Item = o.Item and p.Qty <= o.item_qty   
    ) T
    where t.num = 1
    group by id, item
    
    /* 
    Id  retail_price    items_sold  orders_count
    1   59              2           1
    2   55              9           1
    3   45              50000       1
    */
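    The same "highest qualifying quantity break wins" trick, sketched as a runnable SQLite example (via Python's sqlite3; table variables become plain tables, and only the price column is shown for brevity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE prices (id INTEGER PRIMARY KEY, item INT, qty INT, price REAL);
CREATE TABLE orders (id INTEGER PRIMARY KEY, item INT, item_qty INT);
INSERT INTO prices (item, qty, price) VALUES
  (525001, 1, 59), (525001, 8, 55), (525001, 13, 45);
INSERT INTO orders (id, item, item_qty) VALUES
  (1000, 525001, 9), (1001, 525001, 2), (1002, 525001, 50000);
""")

# For each order, ROW_NUMBER() keeps only the price row with the
# largest quantity break that the ordered quantity still qualifies for.
rows = con.execute("""
SELECT id_order, price
FROM (
  SELECT o.id AS id_order, p.price,
         ROW_NUMBER() OVER (PARTITION BY o.id ORDER BY p.qty DESC) AS num
  FROM orders o
  JOIN prices p ON p.item = o.item AND p.qty <= o.item_qty
)
WHERE num = 1
ORDER BY id_order
""").fetchall()
print(rows)  # [(1000, 55.0), (1001, 59.0), (1002, 45.0)]
```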
    
    qid & accept id: (28752104, 28752242) query: Oracle aggregate functions on strings soup:

    soup wrap:

    Use row_number analytical function instead:

    with t(person  ,Mgr_name ,   Mgr_email) as (
    select 111     ,'brad,pitt'  , 'pitt.brad@test.com' from dual union all
    select 111     ,'mike,clark' , 'clark.mike@test.com' from dual )
    
    select person  ,Mgr_name ,   Mgr_email from (
    select t1.*, row_number() over (order by mgr_name) num from t t1)
    where num = 1
    

    This picks a single mgr_name together with its correct, matching email.

    Output:

        PERSON MGR_NAME   MGR_EMAIL          
    ---------- ---------- -------------------
           111 brad,pitt  pitt.brad@test.com 
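    The same ROW_NUMBER() dedup runs unchanged in SQLite; a minimal runnable check (via Python's sqlite3):

```python
import sqlite3

# One row per person: number the candidates by mgr_name and keep row 1.
con = sqlite3.connect(":memory:")
rows = con.execute("""
WITH t(person, mgr_name, mgr_email) AS (
  SELECT 111, 'brad,pitt',  'pitt.brad@test.com'
  UNION ALL
  SELECT 111, 'mike,clark', 'clark.mike@test.com'
)
SELECT person, mgr_name, mgr_email
FROM (SELECT t.*, ROW_NUMBER() OVER (ORDER BY mgr_name) AS num FROM t)
WHERE num = 1
""").fetchall()
print(rows)  # [(111, 'brad,pitt', 'pitt.brad@test.com')]
```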
    
    qid & accept id: (28775590, 28776160) query: Get greatest value between columns and associated column name soup:

    soup wrap:

    One way (not claiming it's the best way) to attack your problem is to rank your column values, and then select what we want from that data set.

    First to unpivot your record(s):

    select id
           , columnName
           , columnValue
    from mytable
         unpivot
         (
            columnValue for columnName in(a,b,c)
         ) as unpvt
    

    Next, we can assign a ranking to the values based on what we want to see output. To rank the largest column value for an ID 1st, we add:

    select id
           , columnName
           , columnValue
           , rank() over (partition by id order by columnValue DESC, columnName DESC) as rankVal
        from mytable
        unpivot
        (
            columnValue for columnName in(a,b,c)
        ) as unpvt
    

    Note that above in our rank(), if we order just by columnValue, you would end up with two rank 1s if two columns had the same max value. The next step would then return two records for an ID. If this is the output you would want to see, remove the , columnName DESC from the rank() order by.

    Now that we have our values ranked, we can select what we want from that result set:

    with cteUnpivot(id, columnName, columnValue, rankVal)
    AS
    (
        select id
               , columnName
               , columnValue
               , rank() over (partition by id order by columnValue DESC, columnName DESC) as rankVal
        from mytable
        unpivot
        (
            columnValue for columnName in(a,b,c)
        ) as unpvt
    )
    
    select id
           , columnName as [Column]
           , columnValue as [Value]
    from cteUnpivot
    where rankVal = 1
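    A runnable sketch of the same idea in SQLite (via Python's sqlite3). SQLite has no UNPIVOT operator, so a UNION ALL over the columns plays that role before RANK() is applied; the sample rows are hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE mytable (id INT, a INT, b INT, c INT);
INSERT INTO mytable VALUES (1, 10, 30, 20), (2, 5, 5, 1);
""")

# Unpivot via UNION ALL, then rank largest value (ties broken by name) first.
rows = con.execute("""
WITH unpvt(id, columnName, columnValue) AS (
  SELECT id, 'a', a FROM mytable
  UNION ALL SELECT id, 'b', b FROM mytable
  UNION ALL SELECT id, 'c', c FROM mytable
),
ranked AS (
  SELECT *, RANK() OVER (PARTITION BY id
                         ORDER BY columnValue DESC, columnName DESC) AS rankVal
  FROM unpvt
)
SELECT id, columnName, columnValue FROM ranked WHERE rankVal = 1 ORDER BY id
""").fetchall()
print(rows)  # [(1, 'b', 30), (2, 'b', 5)]
```

    Note how id 2 ties on a = b = 5 and the columnName DESC tie-breaker keeps only b, exactly as described above.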
    
    qid & accept id: (28805092, 28805374) query: Populate extra database column depending on other column values soup:

    soup wrap:

    You can create a scalar function:

    ALTER FUNCTION [dbo].[Test] ( @column1 INT, @column2 INT)
    RETURNS INT
        WITH SCHEMABINDING
    AS
        BEGIN
    
            DECLARE @r INT
    
            IF @column1 = 15 AND @column2 = 3
                SET @r = 100
            ELSE
                SET @r = NULL
    
            RETURN @r
        END
    

    And then add a new computed column:

    ALTER TABLE TableName ADD ColumnName AS dbo.Test(column1, column2) PERSISTED
    

    PERSISTED means the column is not calculated on the fly; the computed value is stored. That's why the function uses WITH SCHEMABINDING: without schema binding you cannot make the column persisted.

    You can also update your current data with a simple UPDATE statement, as in @Rhys Jones's answer, and add a trigger on the table like:

    ALTER TRIGGER trTest ON TableName
    AFTER INSERT, UPDATE
    AS
    BEGIN
        IF UPDATE(column1) AND UPDATE(column2)
            BEGIN
                UPDATE  TableName
                SET     NewColumn = CASE
                                      WHEN column1 = 15 and column2 = 3 then 100
                                      ELSE NULL
                                    END
                FROM    Inserted i
                        JOIN TableName t ON t.id = i.id
            END
    END 
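    The trigger approach can be sketched as a runnable SQLite example (via Python's sqlite3; SQLite trigger syntax differs from T-SQL — it is per-row and uses NEW instead of the inserted table, and the table/column names here are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (id INTEGER PRIMARY KEY, column1 INT, column2 INT, newcolumn INT);

-- After each insert, derive newcolumn with the CASE logic from the answer
-- (CASE with no ELSE yields NULL for non-matching rows).
CREATE TRIGGER tr_t AFTER INSERT ON t
BEGIN
  UPDATE t
  SET newcolumn = CASE WHEN NEW.column1 = 15 AND NEW.column2 = 3 THEN 100 END
  WHERE id = NEW.id;
END;

INSERT INTO t (column1, column2) VALUES (15, 3), (1, 2);
""")
result = con.execute("SELECT column1, column2, newcolumn FROM t ORDER BY id").fetchall()
print(result)  # [(15, 3, 100), (1, 2, None)]
```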
    
    qid & accept id: (28840985, 28841040) query: Add rows count in SQL view soup:

    soup wrap:

    My favorite, correlated sub-query to get count:

    CREATE VIEW [dbo].[Question]
    AS
    SELECT (select COUNT(*) from Answers
            where QuestionId = question.Id) as 'Answers',
           question.Id,
           question.CreatorId,
           question.Title,
           question.Content,
           question.CreationDate
    FROM Questions AS question;
    

    Or, a join with a group by:

    CREATE VIEW [dbo].[Question]
    AS
    SELECT COUNT(answer.Id) as 'Answers',
           question.Id,
           question.CreatorId,
           question.Title,
           question.Content,
           question.CreationDate
    FROM Questions AS question 
    JOIN Answers AS answer
    ON  answer.QuestionId = question.Id
    GROUP BY question.Id,
             question.CreatorId,
             question.Title,
             question.Content,
             question.CreationDate;
    

    Note that every column in the select list is either an argument to an aggregate function or listed in the GROUP BY clause. Also note that the inner join version omits questions with no answers; the correlated sub-query version returns 0 for them.
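    A runnable sketch of the correlated sub-query view in SQLite (via Python's sqlite3; the sample schema is trimmed to the columns that matter):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Questions (Id INTEGER PRIMARY KEY, Title TEXT);
CREATE TABLE Answers   (Id INTEGER PRIMARY KEY, QuestionId INT);
INSERT INTO Questions VALUES (1, 'q1'), (2, 'q2');
INSERT INTO Answers VALUES (10, 1), (11, 1);

-- Correlated sub-query: count answers per question inside the view.
CREATE VIEW Question AS
SELECT (SELECT COUNT(*) FROM Answers WHERE QuestionId = q.Id) AS Answers,
       q.Id, q.Title
FROM Questions q;
""")
result = con.execute("SELECT Answers, Id FROM Question ORDER BY Id").fetchall()
print(result)  # [(2, 1), (0, 2)]
```

    The unanswered question q2 shows up with a count of 0, which the inner-join form would drop.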

    qid & accept id: (28845170, 28845521) query: SQL Server 2008 - Migrating AuditLog table to each table soup:

    soup wrap:

    You could do this using a CURSOR and dynamic SQL:

    DECLARE @sql VARCHAR(MAX) = ''
    DECLARE @tableName VARCHAR(100)
    
    DECLARE cur CURSOR LOCAL FORWARD_ONLY FOR
        SELECT TableName FROM TablesWithAuditLogs
    
    OPEN cur
    FETCH FROM cur INTO @tableName
    
    WHILE @@FETCH_STATUS = 0 BEGIN
        SELECT @sql ='
        UPDATE t
            SET t.CreatedOn = a.CreatedOn
        FROM [' + @tableName + '] t
        INNER JOIN AuditLog a
            ON a.ID = t.AuditLogID'
    
        EXEC(@sql)
    
        FETCH FROM cur INTO @tableName
    END
    
    CLOSE cur
    DEALLOCATE cur
    

    RESULT

    Contacts
    ----------------------------------
    ID          AuditLogID  CreatedOn
    ----------- ----------- ----------
    10          1           2015-01-02
    11          3           2015-05-06
    
    Addresses
    ----------------------------------
    ID          AuditLogID  CreatedOn
    ----------- ----------- ----------
    20          4           2014-02-01
    21          5           2010-01-01
    
    Items
    ----------------------------------
    ID          AuditLogID  CreatedOn
    ----------- ----------- ----------
    30          2           2015-03-04
    31          6           2011-03-04
    

    SQL FIDDLE
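    The cursor-plus-dynamic-SQL loop translates naturally to application code as well; a minimal sketch in Python with sqlite3, assuming the same hypothetical AuditLog layout (the list of table names stands in for the TablesWithAuditLogs query):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE AuditLog (ID INTEGER PRIMARY KEY, CreatedOn TEXT);
CREATE TABLE Contacts (ID INTEGER PRIMARY KEY, AuditLogID INT, CreatedOn TEXT);
CREATE TABLE Items    (ID INTEGER PRIMARY KEY, AuditLogID INT, CreatedOn TEXT);
INSERT INTO AuditLog VALUES (1, '2015-01-02'), (2, '2015-03-04');
INSERT INTO Contacts (ID, AuditLogID) VALUES (10, 1);
INSERT INTO Items    (ID, AuditLogID) VALUES (30, 2);
""")

# Loop over the audited tables, building one dynamic UPDATE per table.
for table in ("Contacts", "Items"):
    con.execute(f"""
        UPDATE "{table}"
        SET CreatedOn = (SELECT a.CreatedOn FROM AuditLog a
                         WHERE a.ID = "{table}".AuditLogID)
    """)

contacts = con.execute("SELECT CreatedOn FROM Contacts").fetchall()
items = con.execute("SELECT CreatedOn FROM Items").fetchall()
print(contacts, items)  # [('2015-01-02',)] [('2015-03-04',)]
```

    As with the T-SQL version, the table name is interpolated into the statement text (quoted), since identifiers cannot be bound as parameters.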

    qid & accept id: (28846567, 28847435) query: chaining AND EXISTS/AND NOT EXISTS in SQL soup:

    soup wrap:

    Simple Boolean algebra. Your current query is this:

    valueA = a_tble.id and valueB <> b_tble.id
    

    and, if I'm reading your requirements correctly, you want it to be this:

    valueA = a_tble.id and (valueB <> b_tble.id or valueC <> c_tble.id)
    

    which translates into:

    WHERE EXISTS (SELECT 1 FROM a_tbl WHERE valueA = a_tbl.id)
        AND (NOT EXISTS (SELECT 1 FROM b_tbl WHERE valueB = b_tbl.id)
         OR  NOT EXISTS (SELECT 1 from c_tbl WHERE valueC = c_tbl.id))
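    The Boolean grouping can be checked with a tiny runnable example (SQLite via Python's sqlite3; the three one-column tables and test values are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a_tbl (id INT);
CREATE TABLE b_tbl (id INT);
CREATE TABLE c_tbl (id INT);
INSERT INTO a_tbl VALUES (1);
INSERT INTO b_tbl VALUES (2);
INSERT INTO c_tbl VALUES (3);
""")

def check(valueA, valueB, valueC):
    # EXISTS a AND (NOT EXISTS b OR NOT EXISTS c), as in the answer.
    return con.execute("""
        SELECT EXISTS (SELECT 1 FROM a_tbl WHERE a_tbl.id = ?)
           AND (NOT EXISTS (SELECT 1 FROM b_tbl WHERE b_tbl.id = ?)
             OR NOT EXISTS (SELECT 1 FROM c_tbl WHERE c_tbl.id = ?))
    """, (valueA, valueB, valueC)).fetchone()[0]

print(check(1, 2, 99))  # 1: A matches and C has no match, so the OR is true
print(check(1, 2, 3))   # 0: both B and C match, so the parenthesised OR is false
```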
    
    qid & accept id: (28848937, 28849297) query: Remove duplicate row from output table soup:

    soup wrap:

    You might be better off using group_concat:

    $query = mysql_query("
    SELECT 
        customer.customerId, customer.customerName, order.orderNo, group_concat(order.item SEPARATOR ',') as order_items
    FROM 
        customer
    INNER JOIN
        orderInfo on orderInfo.customerId = customer.customerId
    GROUP BY customer.customerId
    ");
    

    And in your code, just replace the separator:

    while ($order = mysql_fetch_assoc($query))
    {
        // NOTE: the HTML table markup was stripped when this answer was
        // scraped; the <tr>/<td> tags below are assumed. The query aliases
        // the column as order_items, so that key is used here.
        echo '<tr><td>'.$order['orderNo'].'</td>'
           . '<td>'.$order['customerName'].'</td>'
           . '<td>'.str_replace(',', '<br>', $order['order_items']).'</td></tr>';
    }

    You won't need rowspan=2 in this case.
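    A runnable sketch of the group_concat query itself (SQLite via Python's sqlite3; note SQLite passes the separator as a second argument rather than MySQL's SEPARATOR keyword, and its concatenation order is unspecified — sample rows are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customer  (customerId INT, customerName TEXT);
CREATE TABLE orderInfo (customerId INT, orderNo INT, item TEXT);
INSERT INTO customer VALUES (1, 'Alice');
INSERT INTO orderInfo VALUES (1, 100, 'pen'), (1, 100, 'ink');
""")

# One row per customer; the items collapse into a single comma-joined string.
rows = con.execute("""
SELECT c.customerId, c.customerName, o.orderNo,
       group_concat(o.item, ',') AS order_items
FROM customer c
JOIN orderInfo o ON o.customerId = c.customerId
GROUP BY c.customerId
""").fetchall()
print(rows)  # e.g. [(1, 'Alice', 100, 'pen,ink')]
```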

    qid & accept id: (28927069, 28927303) query: How to full join two tables and return one column for joined field in sql? soup:

    soup wrap:

    As both join columns have the same name, you can use the USING clause in the join, which does exactly that: it removes the duplicate column.

    select *
    from a
      full outer join b using (id);
    
    c:\>psql postgres
    psql (9.4.1)
    Type "help" for help.
    
    postgres=> create table a (id integer, usage_a text);
    CREATE TABLE
    postgres=> create table b (id integer, usage_b text);
    CREATE TABLE
    postgres=>
    postgres=> insert into a
    postgres-> values (1,'v1'), (2,'v2'), (3,'v3'), (4,'v4');
    INSERT 0 4
    postgres=>
    postgres=> insert into b
    postgres-> values (3,'v5'), (4,'v6'), (5,'v7'), (6,'v8');
    INSERT 0 4
    postgres=>
    postgres=> select *
    postgres-> from a full outer join b using (id);
    
     id | usage_a | usage_b
    ----+---------+---------
      1 | v1      |
      2 | v2      |
      3 | v3      | v5
      4 | v4      | v6
      5 |         | v7
      6 |         | v8
    (6 rows)
    
    postgres=>
    
    qid & accept id: (28961739, 28962004) query: Select DB records where some key values are the same soup:

    soup wrap:

    You can use COUNT in its analytic (window) version:

    select f1, f2 
      from (
        select tab.*, count(1) over (partition by f1) cnt from tab 
        ) 
      where cnt>1
    

    Results:

    F1            F2
    ----- ----------
    b            123
    b            456
    c            123
    c            789
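    The same analytic COUNT works verbatim in SQLite 3.25+; a runnable check (via Python's sqlite3, with the sample data implied by the results above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tab (f1 TEXT, f2 INT);
INSERT INTO tab VALUES ('a', 111), ('b', 123), ('b', 456), ('c', 123), ('c', 789);
""")

# COUNT(*) OVER (PARTITION BY f1) attaches the group size to every row,
# so rows whose f1 appears more than once can be kept without losing detail.
rows = con.execute("""
SELECT f1, f2
FROM (SELECT tab.*, COUNT(*) OVER (PARTITION BY f1) AS cnt FROM tab)
WHERE cnt > 1
ORDER BY f1, f2
""").fetchall()
print(rows)  # [('b', 123), ('b', 456), ('c', 123), ('c', 789)]
```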
    
    qid & accept id: (28986618, 28986949) query: SQL- Need numbers from a column for string soup:

    soup wrap:

    Patindex only takes 2 parameters. If you want to search step by step, CHARINDEX will work better since you can give it a starting position. However, you can split using XML. You will need to filter out the text before '=' and after '/', then replace every character you don't want included with nothing.

    Try this:

    DECLARE @t table(MES_id int, MES_for_col varchar(max))
    INSERT @t values
    (4717, '4717 = ( 4711 + 4712 + 4713)/ 3'),
    (4729, '4729 = ( 4723 + 4724 + 4725 + 4726)/4'),
    (4788, '4788 = ( 4780 + 4781 + 4782 + 4783 + 4784 + 4785 )/6'),
    (4795, '4795 = ( 4793 + 4794 ) / 2')
    
    SELECT MES_id, t.c.value('.', 'VARCHAR(2000)') as column2
    FROM (
        SELECT MES_id, x = CAST('<t>' + 
            REPLACE(REPLACE(REPLACE(REPLACE(STUFF(SUBSTRING(MES_for_col, 0,
            CHARINDEX('/', MES_for_col)), 1, CHARINDEX('=', MES_for_col), ''), 
            ' ', ''), ')', ''), '(', ''),  '+', '</t><t>') + '</t>' AS XML)
        FROM @t
    ) a
    CROSS APPLY x.nodes('/t') t(c)
    

    Result:

    MES_id  column2
    4717    4711
    4717    4712
    4717    4713
    4729    4723
    4729    4724
    ....
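    The XML trick above is SQL Server-specific; outside the database, the same "numbers on the right-hand side of the formula" extraction is a short routine. A sketch in Python (function name is hypothetical):

```python
import re

def rhs_numbers(formula):
    # Keep only the part between '=' and the final '/', then pull out the numbers.
    rhs = formula.split('=', 1)[1].rsplit('/', 1)[0]
    return re.findall(r'\d+', rhs)

print(rhs_numbers('4717 = ( 4711 + 4712 + 4713)/ 3'))
# ['4711', '4712', '4713']
```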
    
    qid & accept id: (29012160, 29012503) query: SQL - I need to see how many users are associated with a specific set of ids soup:

    soup wrap:

    So you want users with 3 IDs, as long as none of those IDs is D. How about:

    select user
    from table
    group by user
    having count(*) = 3 and max(ID) <> 'D'
    

    The HAVING clause is useful in situations like this. This approach will work as long as the excluded ID is the max (or an easy change for min).

    Following your comment, if the min/max(ID) approach isn't viable then you could use NOT IN:

    select user
    from table
    where user not in (select user from table where ID = 'D')
    group by user
    having count(*) = 3
    

    Following the updated question, if I've understood the mapping between the initial example and reality correctly, then the query should be something like this:

    SELECT user_id
    FROM user_id_type
    WHERE user_id not in (select user_id from user_id_type where user_id_type in ('1','2','3','4','5'))
    GROUP BY user_id
    HAVING COUNT(user_id_type)='16'
    

    What is odd is that you appear to have both a table and a column in the table with the same name 'user_id_type'. This isn't the clearest of designs.
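    The NOT IN plus HAVING pattern from the first half can be checked with a small runnable example (SQLite via Python's sqlite3; sample users are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (user TEXT, ID TEXT);
INSERT INTO t VALUES
  ('u1', 'A'), ('u1', 'B'), ('u1', 'C'),
  ('u2', 'A'), ('u2', 'B'), ('u2', 'D');
""")

# Exclude any user who ever has ID 'D', then keep users with exactly 3 rows.
rows = con.execute("""
SELECT user FROM t
WHERE user NOT IN (SELECT user FROM t WHERE ID = 'D')
GROUP BY user
HAVING COUNT(*) = 3
""").fetchall()
print(rows)  # [('u1',)]
```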

    qid & accept id: (29027827, 29032001) query: How can I change the output of SqlPlusresults with PLSQL? soup:

    soup wrap:

    You're simply missing a -s flag in your call to sqlplus.
    Example code:

    oracle@***:/home/oracle/testing> cat test.sh
    $ORACLE_HOME/bin/sqlplus -s<

    Example output without the -s flag:

    oracle@***:/home/oracle/testing> sh test.sh
    
    SQL*Plus: Release 11.2.0.3.0 Production on Fri Mar 13 08:12:21 2015
    
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    
    Enter user-name:
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    
    SQL> SQL> SQL> SQL> SQL> SQL> SQL> SQL> testing
    SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    

    Example with the -s flag:

    oracle@XXX:/home/oracle/testing> sh test.sh
    testing
    
    qid & accept id: (29065148, 29562934) query: Database internals: implementation of a foreign key constraint soup:

    soup wrap:

    I checked that, indeed, there is no index created by PostgreSQL for a foreign key (using this query: https://stackoverflow.com/a/25596855/1245175).

    On the other hand, a few triggers are created for a foreign key:

    test=# SELECT tgname AS trigger_name
      FROM pg_trigger
     WHERE tgname !~ '^pg_';
     trigger_name
    --------------
    (0 rows)
    
    test=# ALTER TABLE LINEITEM ADD CONSTRAINT LINEITEM_FK1 FOREIGN KEY (L_ORDERKEY)  REFERENCES ORDERS;
    ALTER TABLE
    test=# SELECT tgname AS trigger_name                                                               
      FROM pg_trigger
     WHERE tgname !~ '^pg_';
             trigger_name        
    ------------------------------
     RI_ConstraintTrigger_a_16419
     RI_ConstraintTrigger_a_16420
     RI_ConstraintTrigger_c_16421
     RI_ConstraintTrigger_c_16422
    

    So, I suppose that during the foreign key creation in PostgreSQL, a hash map is created for the referenced table and then a probing is executed for each row of the referencing table.

    Interestingly enough, MonetDB creates indexes of different types for primary and foreign keys (probably join-index and hash-index, respectively).

    sql>select * from sys.idxs;
    +------+----------+------+-------------+
    | id   | table_id | type | name        |
    +======+==========+======+=============+
    | 6467 |     6446 |    0 | orders_pk   |
    | 6470 |     6464 |    1 | lineitem_fk |
    +------+----------+------+-------------+
    2 tuples (3.921ms)
    

    What's more, Oracle enforces primary key constraints using indexes, and by default it does not create any index for a foreign key. However, there are some foreign key indexing tips: https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:292016138754

    qid & accept id: (29113209, 29115368) query: How to transfer ASP.NET MVC Database from LocalDb to SQL Server? soup:

    soup wrap:

    Got it!

    Based on @warheat1990's answer, you just have to change the connection string. But @warheat1990's answer changed a little too much. So here's my original (LocalDb) connection string:

    
    

    To connect it to SQL Server instead of LocalDB, I modified the connection string into:

    
    

    Thanks to @warheat1990 for the idea of simply changing the Web.config. My first thought was to find and use whatever feature VS supplies, if there is any, because Microsoft doesn't have concise documentation on how to do this.

    qid & accept id: (29141134, 29141278) query: SQL: Link 2 data sources by Date soup:

    soup wrap:

    You could JOIN Table1 and Table2 and use the result to UPDATE Table2.

    UPDATE Table2
    SET Id = (
              SELECT t1.Id
              FROM Table1 t1
              WHERE t1.Group = Table2.Group
                AND t1.StartDate = Table2.Date
                AND t1.StartTime = Table2.Time
             )
    

    Or something like this:

    UPDATE Table2 t2
    JOIN Table1 t1
    ON (t1.Group = t2.Group) AND
       (t1.StartDate = t2.Date) 
    SET t2.Id = t1.Id
    WHERE t2.time BETWEEN t1.StartTime AND t1.EndTime
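    The correlated-subquery form can be sanity-checked with SQLite. This is only a sketch: the table layout and the non-reserved column names Grp, Dt and Tm are made up, standing in for Group, Date and Time above.

```python
import sqlite3

# Sketch of the correlated-subquery UPDATE; Grp/Dt/Tm are made-up stand-ins
# for the reserved words Group/Date/Time used in the answer.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Table1 (Id INTEGER, Grp TEXT, StartDate TEXT, StartTime TEXT);
CREATE TABLE Table2 (Id INTEGER, Grp TEXT, Dt TEXT, Tm TEXT);
INSERT INTO Table1 VALUES (7, 'A', '2015-03-20', '08:00');
INSERT INTO Table2 VALUES (NULL, 'A', '2015-03-20', '08:00');
""")
# Each Table2 row looks up the matching Table1 row's Id.
con.execute("""
    UPDATE Table2
    SET Id = (SELECT t1.Id FROM Table1 t1
              WHERE t1.Grp = Table2.Grp
                AND t1.StartDate = Table2.Dt
                AND t1.StartTime = Table2.Tm)
""")
rows = con.execute("SELECT Id FROM Table2").fetchall()
print(rows)  # [(7,)]
```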
    
    qid & accept id: (29169309, 29214796) query: Query and Sort in MongoDB for a many-to-many relationship soup:

    soup wrap:

    I would use a denormalized version of #2. Have a like document:

    {
        "_id" : ObjectId(...),
        "account_id" : 1234,
        "post_id" : 4321,
        "ts" : ISODate(...),
        // additional info about post needed for basic display
        "post_title" : "The 10 Worst-Kept Secrets of Cheesemongers"
        // etc.
    }
    

    With an index on { "account_id" : 1, "ts" : 1 }, you can efficiently find like documents for a specific user ordered by like time.

    db.likes.find({ "account_id" : 1234 }).sort({ "ts" : -1 })
    

    If you put the basic info about the post into the like document, you don't need to retrieve the post document until, say, the user clicks on a link to be shown the entire post.

    The tradeoff is that, if some like-embedded information about a post changes, it needs to be changed in every like. This could be nothing or it could be cumbersome, depending on what you choose to embed and how often posts are modified after they have a lot of likes.

    qid & accept id: (29240487, 29240764) query: How to SELECT (SQL) items only if they are in increasing order? soup:

    soup wrap:

    We'll do this in two steps.

    --Step 1: Find records that violate the rule
    With BadIDs AS (
        --IDs where there is another record with a matching ID and lower number, but greater date
        select t1.id
        from [table] t1
        inner join [table] t2 on t2.id = t1.id 
        where t1.number > t2.number and t1.numberDate < t2.numberDate
    )
    -- Step 2: All IDs not part of the first step:
    select distinct ID from [table] WHERE ID NOT IN (select ID from BadIDs)
    

    Unfortunately, versions of MySQL before 8.0 don't support CTEs (Common Table Expressions). Here's a version that will work with older MySQL:

    select distinct ID 
    from [table] 
    WHERE ID NOT IN 
     (
        select t1.id
        from [table] t1
        inner join [table] t2 on t2.id = t1.id 
        where t1.number > t2.number and t1.numberDate < t2.numberDate
     )
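    The NOT IN version can be sketched against SQLite with made-up sample data (table name t standing in for [table]):

```python
import sqlite3

# Sketch of the NOT IN anti-join: keep only IDs whose numbers increase
# with their dates. Table name (t) and data are made up.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, number INTEGER, numberDate TEXT)")
con.executemany("INSERT INTO t VALUES (?,?,?)", [
    (1, 1, "2015-01-01"), (1, 2, "2015-02-01"),  # in increasing order: kept
    (2, 1, "2015-03-01"), (2, 2, "2015-01-01"),  # higher number, earlier date: dropped
])
rows = con.execute("""
    SELECT DISTINCT id
    FROM t
    WHERE id NOT IN (
        SELECT t1.id
        FROM t t1
        JOIN t t2 ON t2.id = t1.id
        WHERE t1.number > t2.number AND t1.numberDate < t2.numberDate
    )
""").fetchall()
print(rows)  # [(1,)]
```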
    
    qid & accept id: (29254576, 29914343) query: how to save synonyms in database ( Oracle Text ) soup:

    soup wrap:

    I found a solution :

    1- I uploaded my synonyms list to a table called words (it contains all the terms and their synonyms' IDs) and a master table called synset (it contains the synonyms)

    2- create a thesaurus:

    begin
      ctx_thes.create_thesaurus ('MyThesaurus');
    end;
    

    3- create a stored procedure to read from my table [words] and create relationship between synonyms:

    create or replace procedure CreateSynonyms is
      CURSOR syn_cur is
        select s.name_abstract, w.root, w.word_abstract
        from words w, synset s
        where w.synset_id = s.synset_id
          and w.root <> s.name_abstract
          and w.word_abstract <> s.name_abstract
        order by s.synset_id;
      syn_rec syn_cur%rowtype;
    BEGIN
      OPEN syn_cur;
      LOOP
        FETCH syn_cur into syn_rec;
        EXIT WHEN syn_cur%notfound;
        ctx_thes.create_relation('MyThesaurus', syn_rec.name_abstract, 'SYN', syn_rec.word_abstract);
      END LOOP;
      CLOSE syn_cur;
    END;
    

    4- rewrite my query to select synonyms:

    select  /*+ FIRST_ROWS(1)*/  sentence_id,score(1) as sc, isn  
              where contains(PROCESSED_TEXT,'
    
       search for something here
     
     transform((TOKENS,  "{", "}", ","))
     transform((TOKENS,  "syn(", ",listing)", " , "))/seq>
     
     
     ',1)>0 
    

    Hope this helps someone

    qid & accept id: (29272756, 29282396) query: Column to Row SQL syntax soup:

    soup wrap:

    To interchange rows and columns, you need to UNPIVOT (convert columns into row values) first and then PIVOT (rows to columns) based on the UNPIVOT result.

    -- Here is the result 
    SELECT * FROM 
    (
        -- Unpivot here using CROSS APPLY
        SELECT [Group],
        [Values],COLNAMES 
        FROM YOURTABLE
        CROSS APPLY(VALUES (Value1,'Value1'),(Value2,'Value2'),(Value3,'Value3'))
        AS COLUMNNAMES([Values],COLNAMES)
    )TAB
    PIVOT
    (
         -- Specify the values to hold in pivoted column
         MIN([Values])
         -- Specify the name of columns
         FOR [Group] IN([A],[B],[C])
    )P
    ORDER BY COLNAMES
    

    WORKING OF QUERY

    You can use CROSS APPLY to UNPIVOT. Value1 is the column which holds the values of column Value1; 'Value1' (in single quotes) is the hard-coded column name value (which is shown in the COLNAMES column). The CROSS APPLY generates the following result.

    (image: result of the CROSS APPLY unpivot)

    Now with the data generated from CROSS APPLY, you are going to PIVOT which forms the following result.

    (image: result after applying PIVOT)

    Sometimes you cannot know the values in the column Group in advance. In such cases you need to use dynamic SQL. The first step is to get the values of Group into a variable.

    DECLARE @cols NVARCHAR (MAX)
    
    SELECT @cols = STUFF((SELECT ',' + QUOTENAME([Group]) 
                FROM 
                (
                    SELECT distinct [Group] from YOURTABLE
                ) c
                FOR XML PATH(''), TYPE
                ).value('.', 'NVARCHAR(MAX)') 
            ,1,1,'')
    

    Now use the PIVOT query with dynamic SQL. Dynamic SQL is needed because SQL Server cannot take the pivoted column names from a variable otherwise.

    DECLARE @query NVARCHAR(MAX)
    SET @query = '
                SELECT * FROM 
                 (
                    -- Unpivot here using CROSS APPLY
                    SELECT [Group],
                    [Values],COLNAMES 
                    FROM YOURTABLE
                    CROSS APPLY(VALUES (Value1,''Value1''),(Value2,''Value2''),(Value3,''Value3''))
                    AS COLUMNNAMES([Values],COLNAMES)
                 ) x
                 PIVOT 
                 (
                     -- Specify the values to hold in pivoted column
                     MIN([Values])
                     -- Get the column names from variable
                     FOR [Group] IN('+@cols+')
                ) p            
                ORDER BY COLNAMES;'     
    
    EXEC SP_EXECUTESQL @query
    

    Hope you understand the concepts and got your result.
    Any clarifications, feel free to ask.

    qid & accept id: (29276354, 29276686) query: Select a gender dominated group from a pupulation where one gendergroup has lower average salary but higher jobPoints soup:
    soup wrap:
    with t1 as (
        select
            one.GroupName,
            one.GroupJobPoints,
            (select cast(count(1) as float) from TableTwo where GroupName=one.GroupName and Gender='M')/(select cast(count(1) as float) from TableTwo where GroupName=one.GroupName) FracMale,
            (select avg(Salary) from TableTwo where GroupName=one.GroupName) AvgSalary
        from
            TableOne one
    )
    select
        m.GroupName,
        m.GroupJobPoints,
        m.AvgSalary,
        m.FracMale,
        f.GroupName,
        f.GroupJobPoints,
        f.AvgSalary,
        f.FracMale
    from
        t1 m
        cross join t1 f
    where
        m.FracMale>=0.60
        and f.FracMale<=0.40
        and abs(f.GroupJobPoints-m.GroupJobPoints)/m.GroupJobPoints<=0.04
        and m.AvgSalary>f.AvgSalary
    ;
    

    Test data:

    if object_id('TableTwo') is not null drop table TableTwo;
    if object_id('TableOne') is not null drop table TableOne;
    create table TableOne (GroupName varchar(32), GroupJobPoints float, primary key (GroupName) );
    create table TableTwo (GroupName varchar(32) references TableOne(GroupName), Person_ID int, Gender char(1), Salary float, primary key (Person_ID) );
    
    insert into TableOne (GroupName, GroupJobPoints ) values ('1',2000);
    insert into TableOne (GroupName, GroupJobPoints ) values ('2',1950);
    
    declare @PersonID int = 0;
    declare @i int;
    
    set @i = 0; while (@i < 250) begin set @PersonID=@PersonID+1; insert into TableTwo (GroupName, Person_ID, Gender, Salary ) values ('1',@PersonID,'M',25000); set @i=@i+1; end;
    set @i = 0; while (@i < 50) begin set @PersonID=@PersonID+1; insert into TableTwo (GroupName, Person_ID, Gender, Salary ) values ('1',@PersonID,'F',25000); set @i=@i+1; end;
    
    set @i = 0; while (@i < 20) begin set @PersonID=@PersonID+1; insert into TableTwo (GroupName, Person_ID, Gender, Salary ) values ('2',@PersonID,'M',22000); set @i=@i+1; end;
    set @i = 0; while (@i < 300) begin set @PersonID=@PersonID+1; insert into TableTwo (GroupName, Person_ID, Gender, Salary ) values ('2',@PersonID,'F',22000); set @i=@i+1; end;
    

    Output from running all of the above:

    (image: query output)

    qid & accept id: (29292248, 29292832) query: ecpg insert null with host variable (psotgreSQL) soup:

    soup wrap:

    I tried it, and INSERT can use indicator variables too, like this:

    short var1_ind, var2_ind;
    
    void insert(){
      EXEC SQL INSERT INTO mytable (var1, var2)
      VALUES (:var1 INDICATOR :var1_ind, :var2 INDICATOR :var2_ind);
    }
    

    If you want to insert NULL into var1, just make indicator < 0:

    var1_ind = -1;
    

    After assigning -1 to var1_ind, it will insert NULL into var1 in the DB regardless of the value of :var1.

    Here is some information from the manual:

    The indicator variable val_ind will be zero if the value was not null, and it will be negative if the value was null.

    The indicator has another function: if the indicator value is positive, it means that the value is not null, but it was truncated when it was stored in the host variable.

    qid & accept id: (29312505, 29312683) query: Select records that are not associated with the other record soup:

    soup wrap:

    One possible way to select all modules that the student has not registered for, assuming the student no. is 48377767 in this example:

    SELECT m.*
    FROM Modules m
         LEFT JOIN StudentsModules sm ON sm.ModuleCode = m.ModuleCode 
                                         AND sm.StudentNo = 48377767
    WHERE sm.ModuleCode IS NULL
    

    [SQL Fiddle]

    UPDATE:

    A different approach, without a JOIN:

    SELECT m.*
    FROM Modules m
    WHERE m.ModuleCode NOT IN
                       (
                          SELECT ModuleCode
                          FROM StudentsModules
                          WHERE StudentNo = 48377767
                        )
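    The anti-join pattern above is easy to sketch in SQLite; the module codes and the single registration row below are made up:

```python
import sqlite3

# Sketch of the LEFT JOIN ... IS NULL anti-join from the answer;
# module codes and the one registration are made-up sample data.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Modules (ModuleCode TEXT);
CREATE TABLE StudentsModules (StudentNo INTEGER, ModuleCode TEXT);
INSERT INTO Modules VALUES ('COS101'), ('COS201'), ('COS301');
INSERT INTO StudentsModules VALUES (48377767, 'COS101');
""")
rows = con.execute("""
    SELECT m.ModuleCode
    FROM Modules m
    LEFT JOIN StudentsModules sm
           ON sm.ModuleCode = m.ModuleCode AND sm.StudentNo = 48377767
    WHERE sm.ModuleCode IS NULL
    ORDER BY m.ModuleCode
""").fetchall()
print(rows)  # [('COS201',), ('COS301',)]
```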
    
    qid & accept id: (29318004, 29318094) query: How to select certain numbers of groups in MySQL? soup:

    soup wrap:

    One way to do pagination by groups is to assign a product sequence to the query. Using variables, this requires a subquery:

    select t.*
    from (select t.*,
                 (@rn := if(@p = productid, @rn,
                            if(@p := productid, @rn + 1, @rn + 1)
                           )
                 ) as rn
          from table t cross join
               (select @rn := 0, @p := -1) vars
          order by t.productid
         ) t
    where rn between X and Y;
    

    With an index on t(productid), you can also do this with a subquery. The condition can then go in a having clause:

    select t.*,
           (select count(distinct productid)
            from t t2
            where t2.productid <= t.productid
           ) as pno
    from t
    having pno between X and Y;
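    The COUNT(DISTINCT ...) group-numbering idea can be sketched in SQLite (which does not allow HAVING without GROUP BY, so the filter moves to an outer WHERE); the table and data are made up:

```python
import sqlite3

# Sketch of numbering product groups with a correlated COUNT(DISTINCT ...)
# and paginating on that number; data is made up.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (productid INTEGER, price REAL)")
con.executemany("INSERT INTO t VALUES (?,?)",
                [(10, 1.0), (10, 2.0), (20, 3.0), (30, 4.0)])
rows = con.execute("""
    SELECT productid, pno FROM (
        SELECT t.*,
               (SELECT COUNT(DISTINCT t2.productid)
                FROM t t2
                WHERE t2.productid <= t.productid) AS pno
        FROM t
    )
    WHERE pno BETWEEN 2 AND 3
    ORDER BY pno
""").fetchall()
print(rows)  # [(20, 2), (30, 3)]
```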
    
    qid & accept id: (29359799, 29359903) query: Row data from a comma-delimited field used within a select query soup:

    soup wrap:

    You can use a CSV Splitter for this. Here is the DelimitedSplit8K function by Jeff Moden.

    ;WITH CteDelimitted AS(
        SELECT
            t.ClassID,
            nProdType = CAST(s.Item AS INT)
        FROM Table2 t
        CROSS APPLY dbo.DelimitedSplit8K(t.ExcludedList, ',') s
    ),
    CteCross AS(
        SELECT
            t2.ClassID,
            t1.nProdType,
            t1.SprodDesc
        FROM Table1 t1
        CROSS JOIN(
            SELECT DISTINCT ClassID FROM Table2
        )t2
    
    )
    SELECT * 
    FROM CteCross c
    WHERE NOT EXISTS(
        SELECT 1
        FROM CteDelimitted
        WHERE
            ClassID = c.ClassID
            AND nProdType = c.nProdType
    )
    ORDER BY ClassID, nProdType
    

    SQL Fiddle


    Another approach using NOT IN:

    WITH Cte AS(
        SELECT
            t2.ClassID,
            t1.nProdType,
            t1.SprodDesc
        FROM Table1 t1
        CROSS JOIN(
            SELECT DISTINCT ClassID FROM Table2
        )t2
    )
    SELECT *
    FROM Cte c
    WHERE c.nProdType NOT IN(
        SELECT CAST(s.Item AS INT)
        FROM Table2
        CROSS APPLY dbo.DelimitedSplit8K(ExcludedList, ',') s
        WHERE ClassID = c.ClassID
    )
    ORDER BY ClassID, nProdType
    

    SQL Fiddle

    qid & accept id: (29407378, 29408245) query: Selecting Distinct Fields From Joined Rows soup:

    soup wrap:

    try this:

    SELECT a.*, cpmd.*, cpfd.*
    FROM dbo.ANIMAL a    
      LEFT JOIN CALF_PARENT cpm ON a.ID = cpm.Calf AND cpm.IsMother = 'Y'
      LEFT JOIN ANIMAL cpmd     ON cpmd.ID = cpm.Parent
      LEFT JOIN CALF_PARENT cpf ON a.ID = cpf.Calf AND cpf.IsMother = 'N'
      LEFT JOIN ANIMAL cpfd     ON cpfd.ID = cpf.Parent
    

    And your life would be easier if your CALF_PARENT table consisted of 3 columns:

    Animal_ID (PK), Father_ID, Mother_ID
    
    qid & accept id: (29446361, 29446723) query: select rows having time difference less than 2 hour of a single column soup:

    soup wrap:

    This self-join query does the job:

    SQL Fiddle

    select distinct t1.id, t1.cam_time 
      from test t1 join test t2 on t1.rowid <> t2.rowid  
        and trunc(t1.cam_time) = trunc(t2.cam_time)
      where abs(t1.cam_time-t2.cam_time) <= 2/24
      order by t1.id
    

    Edit:

    If cam_time is of TIMESTAMP type, then the condition should be:

    where t1.cam_time between t2.cam_time - interval '2' Hour 
                          and t2.cam_time + interval '2' Hour
    
    qid & accept id: (29470119, 29470331) query: SQL syntax to SUM a Count column for calculating percentage distribution soup:

    soup wrap:

    This would work:

    select [Room Nights],
      count([Room Nights]) AS 'Count of RN',
      cast(
        (count([Room Nights])
        /
        (Select Count([Room Nights]) * 1.0 from HOLDINGS2) 
       ) * 100 as decimal(6,1)
      ) as '% Distribution'    
    FROM HOLDINGS2
    GROUP BY [Room Nights]
    

    The * 1.0 in the subquery forces non-integer division, and the outer cast limits the precision.

    Or, as you're using a modern version of MSSQL you could use window functions:

    cast(count([Room Nights])/(sum(count([Room Nights])*1.0) over ()) * 100 as decimal(6,1))
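    The subquery variant is easy to check in SQLite; holdings(nights) below is a made-up stand-in for HOLDINGS2([Room Nights]):

```python
import sqlite3

# Sketch of the percentage-distribution query; holdings(nights) is a
# made-up stand-in for the HOLDINGS2([Room Nights]) table above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE holdings (nights INTEGER)")
con.executemany("INSERT INTO holdings VALUES (?)", [(1,), (1,), (1,), (2,)])
rows = con.execute("""
    SELECT nights,
           COUNT(*) AS cnt,
           ROUND(COUNT(*) * 100.0 / (SELECT COUNT(*) FROM holdings), 1) AS pct
    FROM holdings
    GROUP BY nights
    ORDER BY nights
""").fetchall()
print(rows)  # [(1, 3, 75.0), (2, 1, 25.0)]
```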
    
    qid & accept id: (29474365, 29474503) query: Getting the Sum of Multiple Indicators by Row soup:

    soup wrap:

    You can use

    SELECT Customer
           , Date
           , Ind1
           , Ind2
           , Ind3
           , Ind4
           , Ind1+Ind2+Ind3+Ind4 As Indicators
      FROM TABLE_NAME
    

    Replace TABLE_NAME with whatever the table is named. (Note that if any of the indicator columns can be NULL, the sum will be NULL for that row; wrap each column in COALESCE(IndN, 0) in that case.) If you don't want all of the Ind1, Ind2, Ind3, Ind4 columns reported, use

    SELECT Customer
           , Date
           , Ind1+Ind2+Ind3+Ind4 As Indicators
      FROM TABLE_NAME
    
    qid & accept id: (29487482, 29487730) query: How to add blank rows when select query sql soup:

    soup wrap:

    While I don't understand the reason for this task, you can do it like this:

    DECLARE @t TABLE ( ID INT )
    DECLARE @c INT  = 8
    
    INSERT  INTO @t
    VALUES  ( 1 ),
            ( 2 ),
            ( 3 );
    WITH    cte
              AS ( SELECT   1 AS rn
                   UNION ALL
                   SELECT   rn + 1
                   FROM     cte
                   WHERE    rn <= @c
                 )
        SELECT TOP ( @c )
                *
        FROM    ( SELECT    ID
                  FROM      @t
                  UNION ALL
                  SELECT    NULL
                  FROM      cte
                ) t
        ORDER BY ID DESC      
    

    Output:

    ID
    3
    2
    1
    NULL
    NULL
    NULL
    NULL
    NULL
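    The same padding trick can be sketched in SQLite, where a recursive CTE generates the filler rows and LIMIT plays the role of TOP; the data is made up:

```python
import sqlite3

# Pad a 3-row table up to 8 rows with NULLs; NULLs sort last under
# ORDER BY ... DESC in SQLite, matching the output listing above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])
rows = con.execute("""
    WITH RECURSIVE pad(rn) AS (
        SELECT 1 UNION ALL SELECT rn + 1 FROM pad WHERE rn < 8
    )
    SELECT id FROM (
        SELECT id FROM t
        UNION ALL
        SELECT NULL FROM pad
    )
    ORDER BY id DESC
    LIMIT 8
""").fetchall()
print([r[0] for r in rows])  # [3, 2, 1, None, None, None, None, None]
```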
    
    qid & accept id: (29489036, 29489143) query: Split comma separated varchar parameter up into temp table soup:

    soup wrap:

    Use CONVERT to XML and CROSS APPLY:

      DECLARE @str varchar(50)
      SET @str='John, Samantha, Bob, Tom'
    
      SELECT names = y.i.value('(./text())[1]', 'nvarchar(1000)')             
      FROM 
      ( 
        SELECT 
            n = CONVERT(XML, '<i>' 
                + REPLACE(@str, ',' , '</i><i>') 
                + '</i>')
      ) AS a 
      CROSS APPLY n.nodes('i') AS y(i)
    

    OUTPUT:

    names
    -----
    John
     Samantha
     Bob
     Tom
    

    EDIT: the temp table isn't needed inside the proc, so the proc will be:

    CREATE PROCEDURE myProc
    
        (@nameList varchar(500))
    
    AS
    BEGIN
    
          SELECT names = y.i.value('(./text())[1]', 'nvarchar(1000)')             
          FROM 
          ( 
            SELECT 
                n = CONVERT(XML, '<i>' 
                    + REPLACE(@nameList, ',' , '</i><i>') 
                    + '</i>')
          ) AS a 
          CROSS APPLY n.nodes('i') AS y(i)
    END
    

    But if you want to insert it into a temp table, below is a sample:

    create table #names 
        (
            Name varchar(20)
        )
    
      DECLARE @str varchar(50)
      SET @str='John, Samantha, Bob, Tom'
    
      insert into #names
      SELECT names = y.i.value('(./text())[1]', 'nvarchar(1000)')             
      FROM 
      ( 
        SELECT 
            n = CONVERT(XML, '<i>' 
                + REPLACE(@str, ',' , '</i><i>') 
                + '</i>')
      ) AS a 
      CROSS APPLY n.nodes('i') AS y(i)
    
      select * from #names 
      drop table #names 
    

    EDIT 2: the input string may contain some special characters like '<', '>', etc. They are not standard for names, but if the given string contains them you can remove them using the REPLACE function: replace(@str,'<','')
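    The splitting logic itself (ignoring the XML machinery) can be sketched in a few lines of Python; note this version also trims the leading spaces that the SQL output keeps:

```python
def split_names(s):
    """Split a comma-separated string into trimmed names, like the
    XML-based splitter above but with whitespace stripped."""
    return [name.strip() for name in s.split(',')]

print(split_names('John, Samantha, Bob, Tom'))
```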

    qid & accept id: (29521617, 29522435) query: using self joins to retrieve unique columns soup:

    soup wrap:

    Your query seems overly complicated. In most databases, you can use window functions for this:

    SELECT msisdn, flex_soc,
           coalesce(base_soc, 'Not Prov') as base_soc, plan_name, limit
    from (SELECT flex.msisdn, flex.product_name as Flex_Soc,
                 MAX(case when base_soc_boo = 'Y' then flex.product_name end) over
                     (partition by flex.msisdn) as base_soc,
                flex.plan_name, flex.limit
          FROM table1 flex
         ) flex
    where base_soc_boo = 'N';
    

    You don't specify a database so ANSI-compatible syntax seems reasonable.

    +----+--------+-----------+----------+-----------+-------+
    |    | MSISDN | FLEX_SOC  | BASE_SOC | PLAN_NAME | Limit |
    +----+--------+-----------+----------+-----------+-------+
    |  6 |    152 | THRWS33   | THRWS33  | ABC       | 10240 |
    |  7 |    152 | WADADJTH5 | THRWS33  | ABC       |  4092 |
    |  8 |    152 | WHOADJTH2 | THRWS33  | ABC       |  1024 |
    |  9 |    149 | WADADJTH4 | Not Prov | ABC       |   512 |
    | 10 |    149 | WADADJTH5 | Not Prov | ABC       |  1024 |
    +----+--------+-----------+----------+-----------+-------+
    
    qid & accept id: (29533346, 29534119) query: join 2 tables case sensitive upper and lower case soup:

    soup wrap:

    There are at least two quick ways you can solve this.

    1. You specify a case-sensitive collation (rules for comparing strings across characters in a character set) for A.Code and B.Code. In MySQL and a few other database management systems, the default collation is case-insensitive.

    That is, assuming that you're using MySQL or similar, you'll have to modify your statement as such:

    SELECT Code, BrandName, Count(*) QTY, SUM(Price) TOTAL
    FROM A
    INNER JOIN B
    ON A.Code=B.Code COLLATE latin1_bin
    GROUP BY Code, BrandName
    

    If, however, you plan on only performing case-sensitive queries on A and B, it may be in your interest to set the default collation on those two tables to case-sensitive.

    Please see How can I make SQL case sensitive string comparison on MySQL?

    2. Cast A.Code and B.Code to a binary string and compare the two. This is a simple way to compare two strings, byte by byte, thus achieving case sensitivity.

    SELECT Code, BrandName, Count(*) QTY, SUM(Price) TOTAL
    FROM A
    INNER JOIN B
    ON BINARY A.Code=B.Code
    GROUP BY Code, BrandName
    
    qid & accept id: (29563877, 29564248) query: T-SQL to pull decimal values from a string soup:

    soup wrap:

    An alternative approach is to remove the characters after the number and before the number. The following expression does this:

    select val, 
           stuff(stuff(val+'x', patindex('%[0-9][^0-9.]%', val+'x') + 1, len(val), ''
                      ), 1, patindex('%[0-9]%', val) - 1, '')
    from (values ('test123 xxx'), ('123.4'), ('123.4yyyyy'), ('tasdf 8.9'), ('asdb'), ('.2345')) as t(val);
    

    The inner stuff() removes the characters after the number. The +'x' handles the problem that occurs when the number is at the end of the string. The outer stuff() handles the part before the number.

    This does assume that there is only one number in the string. You can check this with a where clause like:

    where val not like '%[0-9]%[^0-9.]%[0-9]%'
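    A rough Python equivalent of the extraction (using a regex instead of STUFF/PATINDEX) may make the intent clearer; for strings with no number this version returns an empty string:

```python
import re

def extract_number(val):
    """Pull the first decimal number out of a string, roughly matching
    the STUFF/PATINDEX trick above; '' when there is no number."""
    m = re.search(r'\d*\.?\d+', val)
    return m.group(0) if m else ''

for v in ['test123 xxx', '123.4', '123.4yyyyy', 'tasdf 8.9', 'asdb', '.2345']:
    print(v, '->', extract_number(v))
```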
    
    qid & accept id: (29598252, 29598638) query: SQL Query: For each department that has one or more majors with a GPA under 1.0, print the name of the department and the average GPA of its majors soup:

    soup wrap:

    Question 2:

       select s.sid, s.sname, s.gpa
          from student s
            inner join enroll e
              on s.sid = e.sid
          where e.dname = 'Civil Engineering'
          group by sid
          having count(distinct cno) = 
            (select count(cno) from course where dname = 'Civil Engineering');
    

    Example fiddle here: http://sqlfiddle.com/#!9/807bc/1

    We join the student and enroll tables to get a list of all courses the students are enrolled in, and filter it to only courses in the Civil Engineering department. We then group by student and count the number of distinct courses each student is enrolled in (since, in real life, a student may end up enrolling in the same course multiple times over time), compare that to the total number of courses in the Civil Engineering department, and include only the result rows that match that last condition.

    Question 1:

    select d.dname, avg(s.gpa)
      from dept d
        inner join major m
          on d.dname = m.dname
        inner join student s
          on s.sid = m.sid
      group by d.dname
      having min(s.gpa) < 1.0
    

    or

      select m.dname, avg(s.gpa)
        from major m
          inner join student s
            on s.sid = m.sid
      group by m.dname
      having min(s.gpa) < 1.0
    

    Updated fiddle here: http://sqlfiddle.com/#!9/d12f4/5

    The answer is constructed in a similar fashion. I've given two answers for the second because it seems weird to me that the department table doesn't have a department_id field that the other tables use, whereas instinct would suggest that it would.
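    The GROUP BY ... HAVING MIN(gpa) < 1.0 pattern for Question 1 can be sketched in Python like this (function name is mine):

```python
def depts_with_failing_major(students):
    """students: list of (dname, gpa) pairs for declared majors.
    Return {dname: avg_gpa} for departments where at least one major
    has a GPA under 1.0 - i.e. GROUP BY dname HAVING MIN(gpa) < 1.0."""
    by_dept = {}
    for dname, gpa in students:
        by_dept.setdefault(dname, []).append(gpa)
    return {
        d: sum(gpas) / len(gpas)       # AVG(s.gpa)
        for d, gpas in by_dept.items()
        if min(gpas) < 1.0             # HAVING MIN(s.gpa) < 1.0
    }

print(depts_with_failing_major([('CS', 0.5), ('CS', 3.5), ('Math', 2.0)]))
```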

    qid & accept id: (29619943, 29620304) query: Select Multiple distinct with Order By Date Clause soup:

    soup wrap:

    Try this:

    [EDIT]

    SELECT src.ID, src.TicketNo, src.TicketQuantity, src.TicketRate, src.EnteredDate
    FROM (
        SELECT TicketNo, MAX(EnteredDate) AS MaxEnteredDate
        FROM Tickets
        GROUP BY TicketNo
     ) AS mtn INNER JOIN Tickets AS src ON mtn.TicketNo = src.TicketNo AND mtn.MaxEnteredDate = src.EnteredDate
    ORDER BY src.EnteredDate DESC
    

    Above query returns:

    ID  TicketNo    TicketQuantity  TicketRate  EnteredDate
    6   3000        3               2           2015-01-11 18:27:39
    5   3002        6               2           2015-01-11 18:27:31
    2   3001        2               2           2015-01-11 18:27:15
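    The greatest-row-per-group pattern the query implements can be sketched in Python; this is just the logic, not the SQL:

```python
def latest_per_ticket(rows):
    """Keep, for each TicketNo, the row with the greatest EnteredDate,
    then sort by that date descending - the derived-table-plus-join
    pattern expressed directly."""
    best = {}
    for r in rows:
        t = r['TicketNo']
        if t not in best or r['EnteredDate'] > best[t]['EnteredDate']:
            best[t] = r
    return sorted(best.values(),
                  key=lambda r: r['EnteredDate'], reverse=True)

tickets = [
    {'ID': 2, 'TicketNo': 3001, 'EnteredDate': '2015-01-11 18:27:15'},
    {'ID': 5, 'TicketNo': 3002, 'EnteredDate': '2015-01-11 18:27:31'},
    {'ID': 6, 'TicketNo': 3000, 'EnteredDate': '2015-01-11 18:27:39'},
]
print([r['ID'] for r in latest_per_ticket(tickets)])
```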
    
    qid & accept id: (29628590, 29644246) query: SQL statement to set numbering style for all multiple choice questions in a Moodle course soup:

    soup wrap:

    I asked the same question on Moodle Developer Forums and got the answer from Stuart Mealor and Tim Hunt (Moodle in English: Useful SQL Queries?). In short, it is the following:

    UPDATE mdl_qtype_multichoice_options
    SET answernumbering = 'none'
    WHERE questionid IN (SELECT id FROM mdl_question WHERE category = 123)
    

    The table and field names might depend on the Moodle version. In 2.5.9, the following statement worked for me:

    UPDATE mdl_question_multichoice
    SET answernumbering = 'none'
    WHERE question IN
        (SELECT id FROM mdl_question
         WHERE category = 7);
    
    qid & accept id: (29645733, 29645940) query: How to reverse a GROUP BY like table? soup:

    soup wrap:

    You can achieve it with a Common Table Expression as follows:

    CREATE TABLE #Test
    (
       Animal NVARCHAR(20),
       CountAnimals INT,
       Color NVARCHAR(20)
    )
    
    INSERT INTO #Test VALUES ('Dog', 2, 'brown'), ('Cat', 4, 'black');
    
    WITH CTE AS (
        SELECT Animal,CountAnimals,Color FROM #Test
    
        UNION ALL 
    
        SELECT  Animal,CountAnimals-1,Color
    
        FROM CTE
        WHERE CountAnimals >= 2
    )
    SELECT Animal,Color
    FROM CTE
    ORDER BY Animal DESC
    OPTION (MAXRECURSION 0);
    
    DROP TABLE #Test
    

    OUTPUT

    Animal  Color
     Dog    brown
     Dog    brown
     Cat    black
     Cat    black
     Cat    black
     Cat    black
    

    SQL FIDDLE
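    The row-expansion the recursive CTE performs amounts to repeating each row CountAnimals times; a Python sketch of that logic:

```python
def expand_counts(rows):
    """Reverse a GROUP BY: repeat each (animal, color) row `count`
    times, like the recursive CTE that decrements CountAnimals
    until it reaches 1, then sort like ORDER BY Animal DESC."""
    out = []
    for animal, count, color in rows:
        out.extend([(animal, color)] * count)
    return sorted(out, reverse=True)

print(expand_counts([('Dog', 2, 'brown'), ('Cat', 4, 'black')]))
```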

    qid & accept id: (29652394, 29658332) query: Sql: difference between two dates soup:

    soup wrap:

    Getting the number of days excluding Saturday and Sunday is not so difficult; you will find several solutions for that on SO.

    Considering holidays is more challenging. One solution is to use the Oracle SCHEDULER. By default this is used for scheduler jobs; however, I don't see any reason not to use it for other purposes.

    The biggest problem is Easter day; see here: Computus. I think the most efficient way is to hard-code the dates and maintain them manually.

    BEGIN
        DBMS_SCHEDULER.CREATE_SCHEDULE('New_Year', repeat_interval => 'FREQ=YEARLY;BYDATE=0101');
    
        DBMS_SCHEDULER.CREATE_SCHEDULE('Easter_Sunday',  repeat_interval => 'FREQ=YEARLY;BYDATE=20150405,    20160327,    20170416,    20170416,    20180401,    20190421,    20200412', comments => 'Hard coded till 2020');
        DBMS_SCHEDULER.CREATE_SCHEDULE('Good_Friday',    repeat_interval => 'FREQ=YEARLY;BYDATE=20150405-2D, 20160327-2D, 20170416-2D, 20170416-2D, 20180401-2D, 20190421-2D, 20200412-2D');
        DBMS_SCHEDULER.CREATE_SCHEDULE('Easter_Monday',   repeat_interval => 'FREQ=YEARLY;BYDATE=20150405+1D, 20160327+1D, 20170416+1D, 20170416+1D, 20180401+1D, 20190421+1D, 20200412+1D');
        DBMS_SCHEDULER.CREATE_SCHEDULE('Ascension_Day',   repeat_interval => 'FREQ=YEARLY;BYDATE=20150405+39D,20160327+39D,20170416+39D,20170416+39D,20180401+39D,20190421+39D,20200412+39D');
        DBMS_SCHEDULER.CREATE_SCHEDULE('Pentecost_Monday', repeat_interval => 'FREQ=YEARLY;BYDATE=20150405+50D,20160327+50D,20170416+50D,20170416+50D,20180401+50D,20190421+50D,20200412+50D');
    
        DBMS_SCHEDULER.CREATE_SCHEDULE('Repentance_and_Prayer', repeat_interval => 'FREQ=DAILY;BYDATE=1122-SPAN:7D;BYDAY=WED', 
            comments => 'Wednesday before November 23th, Buss- und Bettag');
        -- alternative solution: 
        --DBMS_SCHEDULER.CREATE_SCHEDULE('Repentance_and_Prayer', repeat_interval => 'FREQ=MONTHLY;BYMONTH=NOV;BYDAY=3 WED', 
        --    comments => '3rd Wednesday in November');
    
        DBMS_SCHEDULER.CREATE_SCHEDULE('Labor_Day', repeat_interval => 'FREQ=YEARLY;BYDATE=0501');
        DBMS_SCHEDULER.CREATE_SCHEDULE('German_Unity_Day', repeat_interval => 'FREQ=YEARLY;BYDATE=1003');
        DBMS_SCHEDULER.CREATE_SCHEDULE('Christmas', repeat_interval => 'FREQ=YEARLY;BYDATE=1225+SPAN:2D');
    
        DBMS_SCHEDULER.CREATE_SCHEDULE('Christian_Celebration_Days', repeat_interval => 'FREQ=DAILY;INTERSECT=Easter_Sunday,Good_Friday,Easter_Monday,Ascension_Day,Pentecost_Monday,Repentance_and_Prayer,Christmas');
        -- alternative solution: 
        -- DBMS_SCHEDULER.CREATE_SCHEDULE('Christian_Celebration_Days', repeat_interval => 'FREQ=Good_Friday;BYDAY=1 MON, 6 THU,8 MON');
        DBMS_SCHEDULER.CREATE_SCHEDULE('Political_Holidays', repeat_interval => 'FREQ=DAILY;INTERSECT=New_Year,Labor_Day,German_Unity_Day');
    
    
    END;
    /
    

    See syntax for calendar here: Calendaring Syntax

    Then you can use the schedules like this:

    CREATE OR REPLACE FUNCTION DateDiff(end_date IN TIMESTAMP) RETURN INTEGER AS
        next_run_date TIMESTAMP := TRUNC(SYSTIMESTAMP);
        res INTEGER := 0;
    BEGIN
        IF end_date > SYSTIMESTAMP THEN
            LOOP
                DBMS_SCHEDULER.EVALUATE_CALENDAR_STRING('FREQ=DAILY;INTERVAL=1;BYDAY=MON,TUE,WED,THU,FRI; EXCLUDE=Christian_Celebration_Days,Political_Holidays', NULL, next_run_date, next_run_date);
                EXIT WHEN next_run_date >= end_date;
                res := res + 1;
            END LOOP;
            RETURN res;
        ELSE
            RAISE VALUE_ERROR;
        END IF;     
    END;
    
    SELECT DateDiff(TO_DATE('04/10/2015','mm/dd/yyyy')) AS Differenz FROM DUAL;
    

    Output the next 20 holidays for testing:

    DECLARE
        next_run_date TIMESTAMP;
    BEGIN
        FOR i IN 1..20 LOOP
            DBMS_SCHEDULER.EVALUATE_CALENDAR_STRING('FREQ=DAILY;INTERSECT=Christian_Celebration_Days,Political_Holidays', NULL, next_run_date, next_run_date);
            DBMS_OUTPUT.PUT_LINE(next_run_date);
        END LOOP;
    END;
    

    Update

    I even found a more compact version:

    BEGIN
        -- Start with first celebration day (good Friday), all dependent celebration days have to be after this day for proper calculation of schedule
        DBMS_SCHEDULER.CREATE_SCHEDULE('GOOD_FRIDAY', repeat_interval => 'FREQ=YEARLY;BYDATE=20100402,20110422,20120406,20130329,20140418,20150403,20160325,20170414,20180330,20190419,20200410,20210402,20220410,20230407,20240329,20250418,20260403,20270326,20280414,20290330,20300419', comments => 'Hard coded 2010 to 2030');
        -- Easter Sunday can be skipped for list of holidays, otherwise 'FREQ=Good_Friday;BYDAY=1 SUN+SPAN:2D'
        DBMS_SCHEDULER.CREATE_SCHEDULE('EASTER_MONDAY', repeat_interval => 'FREQ=Good_Friday;BYDAY=1 MON', comments => '1st Monday after Good Friday');
        DBMS_SCHEDULER.CREATE_SCHEDULE('ASCENSION_DAY', repeat_interval => 'FREQ=Good_Friday;BYDAY=6 THU', comments => '6th Thursday after Good Friday (40 days after Easter)');
        -- Pentecost Sunday can be skipped for list of holidays, otherwise 'FREQ=Good_Friday;BYDAY=8 SUN+SPAN:2D'
        DBMS_SCHEDULER.CREATE_SCHEDULE('PENTECOST_MONDAY', repeat_interval => 'FREQ=Good_Friday;BYDAY=8 MON', comments => '8th Monday after Good Friday (50 days after Easter)');
        DBMS_SCHEDULER.CREATE_SCHEDULE('EASTER_RELATED_DAYS', repeat_interval => 'FREQ=Good_Friday;BYDAY=1 MON, 6 THU,8 MON');
    END;
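    The counting loop inside DateDiff boils down to "count weekdays before end_date that are not holidays"; here is a plain Python sketch of that idea (names are mine, and it takes an explicit holiday set instead of the scheduler calendars):

```python
from datetime import date, timedelta

def business_days_until(start, end, holidays=frozenset()):
    """Count weekdays in (start, end], skipping holidays - the same
    loop DateDiff runs via EVALUATE_CALENDAR_STRING, in plain Python."""
    days = 0
    d = start
    while d < end:
        d += timedelta(days=1)
        if d.weekday() < 5 and d not in holidays:  # Mon=0 .. Fri=4
            days += 1
    return days

# Mon 2015-04-06 to Mon 2015-04-13, optionally skipping Fri 2015-04-10
print(business_days_until(date(2015, 4, 6), date(2015, 4, 13)))
print(business_days_until(date(2015, 4, 6), date(2015, 4, 13),
                          {date(2015, 4, 10)}))
```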
    
    qid & accept id: (29660006, 29660521) query: return the last row that meets a condition in sql soup:

    soup wrap:

    Like @PaulGriffin said in his comment, you need to remove the PreviousReadDate column from your GROUP BY clause.

    Why are you experiencing this behaviour?

    Basically, with the grouping you have chosen - (SerialNumber, ReadTypeCode, PreviousReadDate) - the query prints SerialNumber, ReadTypeCode, MAX(PreviousReadDate) for each distinct combination of those values. Since every group that includes PreviousReadDate contains exactly one value of that column, you are applying an aggregate function to a single value - so the output of MAX() is equal to the value without it.

    What you wanted to achieve

    Get MAX value of PreviousReadDate for every pair of (SerialNumber,ReadTypeCode). So this is what your GROUP BY clause should include.

    select a.SerialNumber, ReadTypeCode, MAX(PreviousReadDate) from Meter as a
    left join RegisterLevelInformation as b on a.MeterID = b.MeterID
    where ReadType = 'ACT'
    group by a.SerialNumber,b.ReadTypeCode
    order by a.SerialNumber
    

    That is the correct SQL query for what you want.

    Difference example

    ID         MeterID    ReadValue    Consumption  PreviousReadDate    ReadType
    ============================================================================
    1          1          250          250          1 jan 2015          EST
    2          1          550          300          1 feb 2015          ACT
    3          1          1000         450          1 apr 2015          EST
    

    Here if you apply the query with grouping by 3 columns you would get result:

    SerialNumber | ReadTypeCode | PreviousReadDate
      ABC1       |    EST       | 1 jan 2015 -- which is MAX of 1 value (1 jan 2015)
      ABC1       |    ACT       | 1 feb 2015
      ABC1       |    EST       | 1 apr 2015
    

    But when you instead group only by SerialNumber, ReadTypeCode it would yield this result (considering the sample data that I posted):

    SerialNumber | ReadTypeCode | PreviousReadDate
      ABC1       |    EST       | 1 apr 2015 -- which is MAX of 2 values (1 jan 2015, 1 apr 2015)
      ABC1       |    ACT       | 1 feb 2015 -- which is MAX of 1 value (because ReadTypeCode is different from the row above)
    

    Explanation of your second query

    In this query, you are right indeed: each serial is shown only once.

    select a.SerialNumber, count(*) from Meter as a
    left join RegisterLevelInformation as b on a.MeterID = b.MeterID
    group by a.SerialNumber
    order by a.SerialNumber
    

    But this query would produce odd results you don't expect if you add more columns to the grouping (which you have done in your first query; try it yourself).
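
    As a sketch only (assuming SQL Server 2005 or later and the same tables), a window function gives the same per-(SerialNumber, ReadTypeCode) maximum without collapsing the rows:

    select a.SerialNumber,
           b.ReadTypeCode,
           max(b.PreviousReadDate) over
               (partition by a.SerialNumber, b.ReadTypeCode) as LatestReadDate
    from Meter as a
    left join RegisterLevelInformation as b on a.MeterID = b.MeterID
    where b.ReadType = 'ACT'

    Unlike the GROUP BY version, this keeps one output row per input row, which is handy when you also need the other columns.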

    qid & accept id: (29716952, 29717150) query: Query for branch numbers with no salesperson soup:

    soup wrap:

    This will get the branches without salespeople, ignoring whether orders exist:

    Select branchnumber
    from branch
    where branchnumber not in (Select empbranch from employee 
        where emptitle = 'salesperson')
    

    Based on your clarification, I believe you want to know what branches haven't had any sales within their state. Start by getting the Salespeople based on the orders they have sold:

    Select employeeid, empbranch
    from employee
    join orders on orders.salesperson = employee.employeeid
    

    Now you know who sold something, narrow it down by state:

    Select employeeid, empbranch
    from employee
    join orders on orders.salesperson = employee.employeeid
    join customer on customer.customerid = orders.customerid 
    join branch on employee.empbranch = branch.branchnumber 
        and branch.branchstate = customer.custstate
    

    So now you've got only the employees that have sold something in their home state. You need to flip that and get the branches that don't have orders sold by their own salespeople:

    Select branchnumber
    from branch
    where branchnumber not in (
        select empbranch
        from employee
        join orders on orders.salesperson = employee.employeeid
        join customer on customer.customerid = orders.customerid 
        join branch on employee.empbranch = branch.branchnumber 
            and branch.branchstate = customer.custstate)
    
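
    One caveat with NOT IN: if the subquery can return a NULL empbranch, NOT IN matches nothing at all. A NOT EXISTS sketch over the same assumed schema avoids that pitfall:

    Select branchnumber
    from branch
    where not exists (
        select 1
        from employee
        join orders on orders.salesperson = employee.employeeid
        join customer on customer.customerid = orders.customerid
        join branch b2 on employee.empbranch = b2.branchnumber
            and b2.branchstate = customer.custstate
        where employee.empbranch = branch.branchnumber)
    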
    qid & accept id: (29721968, 29722706) query: How to get max count of referal to a root node in a tree soup:

    soup wrap:

    Assuming "first level" is defined by parent_id IS NULL and the current version Postgres 9.4:

    SELECT parent_id, count(*) AS referral_ct
    FROM  (
       SELECT id AS parent_id
       FROM   tbl
       WHERE  parent_id IS NULL
       ) t1
    JOIN   tbl t2 USING (parent_id)
    GROUP  BY 1
    ORDER  BY 2 DESC
    LIMIT  1;  -- to only get 1 row with max. referral_ct
    

    With only few root nodes, JOIN LATERAL may be faster:

    SELECT t1.id, t2.referral_ct
    FROM  (
       SELECT id
       FROM   tbl
       WHERE  parent_id IS NULL
       ) t1
    LEFT  JOIN LATERAL (
       SELECT parent_id, count(*) AS referral_ct
       FROM   tbl
       WHERE  parent_id = t1.id
       GROUP  BY 1
       ) t2 ON true
    ORDER   BY 2 DESC
    LIMIT   1;  -- to only get 1 row with max. referral_ct
    
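
    The first query can also be sketched without the derived table, by joining each node to its parent and keeping only root parents (same assumed tbl):

    SELECT t2.parent_id, count(*) AS referral_ct
    FROM   tbl t2
    JOIN   tbl t1 ON t1.id = t2.parent_id
    WHERE  t1.parent_id IS NULL   -- parent is a first-level (root) node
    GROUP  BY t2.parent_id
    ORDER  BY referral_ct DESC
    LIMIT  1;
    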

    Related, with more explanation:

    qid & accept id: (29759568, 29764647) query: Replace multiple charachters with a sigle line in ORACLE SQL soup:

    soup wrap:

    It would depend on how much junk you have in your zip codes and phones. For example, you could remove all non-digital characters in those fields with a replace like this one:

    SELECT REGEXP_REPLACE('234N2&.-@3NDJ23842','[^[:digit:]]+') FROM DUAL
    

    And afterwards you could format the resulting digits with a replace like this:

    SELECT REGEXP_REPLACE('2342323842','([[:digit:]]{3})([[:digit:]]{3})([[:digit:]]{4})','\1 \2 \3') FROM DUAL
    

    I know the examples are not valid as zip codes nor phone numbers but I think they might help you.
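
    The two steps can also be nested into a single expression (illustrative input only):

    SELECT REGEXP_REPLACE(
             REGEXP_REPLACE('234N2&.-@3NDJ23842', '[^[:digit:]]+'),
             '([[:digit:]]{3})([[:digit:]]{3})([[:digit:]]{4})',
             '\1 \2 \3')   -- yields '234 232 3842' for this input
    FROM DUAL
    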

    qid & accept id: (29771413, 29771482) query: Check if email already exists in database and add/change data soup:

    soup wrap:

    Use insert . . . on duplicate key update. You can do this if you have a unique key on what you want to be unique:

    create unique index idx_results_name_email ON Results (name, email);
    

    Then, the database will enforce uniqueness. The statement you want is:

    INSERT INTO Results (1paracwierc, 1paracwierc2, 2paracwierc, 2paracwierc2, 3paracwierc, 3paracwierc2, 4paracwierc, 4paracwierc2, 1parapol, 1parapol2, 2parapol, 2parapol2, final, final2, name, email)
        VALUES ($quantity, $quantity2, $quantity3, $quantity4, $quantity5, $quantity6, $quantity7, $quantity8, $quantity9, $quantity10, $quantity11, $quantity12, $quantity13, $quantity14, '$name', '$email')
        ON DUPLICATE KEY UPDATE 1paracwierc = VALUES(1paracwierc),
                                1paracwierc2 = VALUES(1paracwierc2),
                                 . . .
                                final2 = VALUES(final2);
    
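
    A minimal, self-contained sketch of the same pattern, using a hypothetical three-column table rather than your actual schema:

    CREATE UNIQUE INDEX idx_results_name_email ON Results (name, email);

    INSERT INTO Results (name, email, final)
    VALUES ('Alice', 'alice@example.com', 42)
    ON DUPLICATE KEY UPDATE final = VALUES(final);

    Note that MySQL 8.0.20 deprecates VALUES() in this position in favour of a row alias: INSERT ... VALUES (...) AS new ... ON DUPLICATE KEY UPDATE final = new.final.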
    qid & accept id: (29776360, 29776980) query: Convert varchar column to datetime in sql server soup:

    soup wrap:

    First, if your table column is of "DateTime" type then it will save data in the format "2014-10-09 00:00:00.000" whether or not you convert it to date. If that's not the case and you have SQL Server version 2008 or above, then you can use this:

    DECLARE @data nvarchar(50)
    SET @data =  '10/9/2014'
    
    IF(ISDATE(@data)>0)
    BEGIN
        SELECT CONVERT(DATE, @data)
    END
    

    Otherwise

    DECLARE @data nvarchar(50)
    SET @data =  '10/9/2014'
    
    IF(ISDATE(@data)>0)
    BEGIN
        SELECT CONVERT(DATETIME, @data)
    END
    

    To Insert into table

    INSERT INTO dbo.YourTable
    SELECT CREATEDATE FROM
    (
        SELECT
            (CASE WHEN (ISDATE(CREATEDATE) > 0) THEN CONVERT(DATE, CREATEDATE) 
            ELSE CONVERT(DATE, '01/01/1900') END) as CREATEDATE 
        FROM 
            [dbo].[TestTB]
    ) AS Temp
    WHERE
        CREATEDATE <> CONVERT(DATE, '01/01/1900')
    
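
    On SQL Server 2012 or later, TRY_CONVERT expresses the same guard more compactly; it returns NULL instead of raising an error for an invalid string (a sketch, not tied to your table):

    DECLARE @data nvarchar(50) = '10/9/2014'

    SELECT TRY_CONVERT(date, @data, 101)   -- style 101 = mm/dd/yyyy; NULL if not a valid date
    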
    qid & accept id: (29782934, 29783015) query: sql statement with an if inside of it? soup:

    soup wrap:

    You could do it like this (example column)

    CASE WHEN timesheet.wed > 0 THEN timesheet.wed/10000 ELSE null END as wed,
    

    if the value can be null then you have to do this:

    CASE WHEN COALESCE(timesheet.wed,0) > 0 THEN timesheet.wed/10000 ELSE null END as wed,
    
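
    For context, a full statement using this expression might look like the sketch below (hypothetical timesheet table; the decimal divisor avoids integer truncation):

    SELECT CASE WHEN COALESCE(timesheet.wed, 0) > 0
                THEN timesheet.wed / 10000.0
           END AS wed   -- ELSE NULL is implicit
    FROM timesheet
    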
    qid & accept id: (29802055, 29825343) query: Multiple sort/filter factors on a Fusion tables query soup:

    soup wrap:

    LIMIT isn't a sort criterion, the AND is wrong there, and LIMIT has to be the last clause in the SQL:

    '+ORDER+BY+Date+DESC+LIMIT+1000'
    

    The order of the clauses is fixed and has to be:

    1. where
    2. group
    3. order
    4. limit

    Your code seems to be JavaScript; do yourself a favour and let JS handle the encoding.

    Possible approach:

    var base = 'https://www.googleapis.com/fusiontables/v2/query',
      columns = 'SELECT Lat,Lng,Date,Username,TripID',
      from = 'from fusionTableID',
      //apply a filter when you want to
      where = '',
      //group the results when you want to
      groupby = '',
      orderby = 'ORDER BY DATE DESC',
      limit = 'LIMIT 1000',
      key = 'yourApiKey',
      //do you want a JSONP-response? Add a callback-parameter
      callback = '&callback=functionName',
      //prepare the query;
      sql = encodeURIComponent([columns, from, where, groupby, orderby, limit].join(' ')),
      //prepare the url
      url = [base, '?sql=', sql, callback, '&key=', key].join('');
    
    //see what we got
    document.body.appendChild(document.createTextNode(url));

    Demo using all 4 clauses: http://jsfiddle.net/doktormolle/fc47243g/

    qid & accept id: (29827229, 29828861) query: MVC4 two drop downs one view soup:

    soup wrap:

    ViewBag has its uses, but you really want to try to avoid it. Change your model to be:

    public class BigViewModel
    {
        public Materials_Packer mPacker { get; set; }
        public Materials_Product mProduct { get; set; }
        public SelectList ProductList { get; set; }
        public SelectList UserList { get; set; }
    }
    

    I would recommend not using the class name as the instance name; I have run into issues in the past where the compiler was confused by that. Then in your controller you can set the lists:

    public ActionResult Assign() {
        BigViewModel vm = new BigViewModel();
        vm.ProductList = new SelectList(db.BigViewModel.ToList(), "MatProdID", "Product");
        vm.UserList = new SelectList(db.BigViewModel.ToList(), "MatPackID", "PackerName");
        return View(vm);
    }
    

    your drop downs will now be changed to

    @Html.DropDownListFor(model => model.Materials_Product.MatProdID, Model.ProductList)
    
    qid & accept id: (29871937, 29871955) query: SQLite - How to remove rows that have a string cell value contained in other rows? soup:

    soup wrap:

    You can use exists with like (case-insensitive for ASCII by default); note the rowid check, without which every row would match its own string and be deleted:

    delete from table
    where exists (select * from table t
                  where t.rowid <> table.rowid
                    and t.string like '%' || table.string || '%')
    

    or instr (case-sensitive):

    delete from table
    where exists (select * from table t
                  where t.rowid <> table.rowid
                    and instr(t.string, table.string) > 0)
    
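
    A fuller sketch of the case-sensitive variant over a hypothetical table named words, keeping the longest strings and leaving exact duplicates alone (the length guard prevents two equal strings from deleting each other):

    delete from words
    where exists (
        select 1 from words w
        where w.rowid <> words.rowid
          and length(w.string) > length(words.string)
          and instr(w.string, words.string) > 0)
    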
    qid & accept id: (29883190, 29883683) query: SQL optional join and default value soup:
    soup wrap:

    Currently it only returns rows where the join matches

    Use LEFT JOIN instead of JOIN to also retrieve the rows that don't "have" a parent.

    I'd like to sort by the "parent"'s name (f27), but if a parent is not linked I'd like to put it at the bottom of the sort.

    Instead of

    ORDER BY parent.data->>'f27' ASC
    

    try to fill in the nulls with a value that would be pushed to the end when sorting (for example, the string 'zzz', but think of something that would make sense based on the data you have):

    ORDER BY coalesce(parent.data->>'f27', 'zzz') ASC
    

    Or, even better, use

    ORDER BY parent.data->>'f27' ASC NULLS LAST
    

    (thanks, Marth!)


    In the end, your query could look like this:

    SELECT e.*
    FROM entry AS e
    LEFT JOIN entry as parent 
      ON parent.entry_id = cast(e.data->>'f22' as integer)
    WHERE e.deleted = 0 AND e.section_id = $1 AND e.grp_id = $2 
    ORDER BY parent.data->>'f27' ASC NULLS LAST
    

    Here's a demo: http://www.sqlfiddle.com/#!15/d18cc/16
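
    If you ever need the same ordering on an engine without NULLS LAST, a common sketch is to sort on an IS NULL flag first (false sorts before true):

    ORDER BY (parent.data->>'f27') IS NULL, parent.data->>'f27' ASC
    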

    qid & accept id: (29923332, 29923816) query: MySQL: Query to get some rows compulsory soup:

    soup wrap:

    You can do this with a left join:

    SELECT spot_key, m.market, panel_member, SUM(weight) as TVR 
    FROM (SELECT '9' as market UNION ALL SELECT '300'
         ) m LEFT JOIN
         break_minute_tvr_fixed b 
         ON b.market = m.market and
            b.column1 in (1,3,4,2,3,4) and
            b.section in (1,2,3,4) and
            b.sex in (1,2) and
            b.age in (1,2,3,4,5,6,7) and
            b.spot_key in ( '1:20141017:2129' )  
    GROUP BY spot_key, market;
    

    Of course spot_key will be NULL, unless you explicitly assign it a value:

    SELECT '1:20141017:2129' as spot_key, m.market, panel_member, SUM(weight) as TVR 
    
    qid & accept id: (29932394, 29932744) query: Showing History of changes from a History table soup:

    soup wrap:

    To get the roleID and locationID on separate rows you can use a simple UNION ALL.

    And to combine the old and new values use the ROW_NUMBER() window function, like this:

    ;with t as(
        select *,
        ROW_NUMBER() OVER(partition by userid Order BY lastUpdateDate) rn
        from @MyTable
    ),
    a as (
    select userId, 'locationId' as fieldname,
    locationId as value, lastUpdateUserId, lastUpdateDate, rn
    from t
    UNION ALL
    select userId, 'roleId' as fieldname,
    roleId as value, lastUpdateUserId, lastUpdateDate, rn
    from t
    )
    select CASE WHEN a2.userId IS NULL THEN 'I' ELSE 'U' END as ChangeType,
    a1.userId, a1.lastUpdateDate, a1.lastUpdateUserId, a1.fieldname, a1.value as newValue, a2.value as oldvalue
    FROM a a1 LEFT JOIN a a2
    ON a1.userId = a2.userId and a1.fieldname = a2.fieldname
    AND a1.rn = a2.rn+1
    order by 2,3,5
    

    The a1 alias in the query above contains the "new values" and a2 the "old values". When you use the real data you will also need to partition by the fieldname (and perhaps the table name) and to join by them as well.

    The result:

    ChangeType userId      lastUpdateDate          lastUpdateUserId fieldname  newValue    oldvalue
    ---------- ----------- ----------------------- ---------------- ---------- ----------- -----------
    I          1           2015-04-30 12:20:59.183 7                locationId 1000        NULL
    I          1           2015-04-30 12:20:59.183 7                roleId     1           NULL
    U          1           2015-05-03 12:20:59.183 6                locationId 1100        1000
    U          1           2015-05-03 12:20:59.183 6                roleId     3           1
    U          1           2015-05-07 12:20:59.183 7                locationId 1000        1100
    U          1           2015-05-07 12:20:59.183 7                roleId     3           3
    I          2           2015-05-01 12:20:59.183 9                locationId 1100        NULL
    I          2           2015-05-01 12:20:59.183 9                roleId     5           NULL
    U          2           2015-05-02 12:20:59.183 6                locationId 1110        1100
    U          2           2015-05-02 12:20:59.183 6                roleId     5           5
    I          4           2015-05-04 12:20:59.183 8                locationId 1500        NULL
    I          4           2015-05-04 12:20:59.183 8                roleId     5           NULL
    I          7           2015-05-05 12:20:59.183 9                locationId 1000        NULL
    I          7           2015-05-05 12:20:59.183 9                roleId     8           NULL
    U          7           2015-05-06 12:20:59.183 9                locationId 1100        1000
    U          7           2015-05-06 12:20:59.183 9                roleId     9           8
    I          9           2015-05-08 12:20:59.183 2                locationId 1100        NULL
    I          9           2015-05-08 12:20:59.183 2                roleId     5           NULL
    U          9           2015-05-09 12:20:59.183 5                locationId 1100        1100
    U          9           2015-05-09 12:20:59.183 5                roleId     6           5
    
    (20 row(s) affected)
    
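
    On SQL Server 2012 or later, LAG() can replace the self-join over the same unpivoted CTE a (a sketch, untested against your data; it flags a row as an insert when there is no earlier value in its partition):

    select case when lag(value) over (partition by userId, fieldname
                                      order by lastUpdateDate) is null
                then 'I' else 'U' end as ChangeType,
           userId, lastUpdateDate, lastUpdateUserId, fieldname,
           value as newValue,
           lag(value) over (partition by userId, fieldname
                            order by lastUpdateDate) as oldvalue
    from a
    order by userId, lastUpdateDate, fieldname
    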
    qid & accept id: (29941637, 29941691) query: Converting MySQL string date to Y-m-d H:i:s soup:

    soup wrap:

    The correct format for str_to_date should be

    mysql> select str_to_date('Friday 08 May 2015','%W %d %M %Y');
    +-------------------------------------------------+
    | str_to_date('Friday 08 May 2015','%W %d %M %Y') |
    +-------------------------------------------------+
    | 2015-05-08                                      |
    +-------------------------------------------------+
    1 row in set (0.00 sec)
    

    The format that you are using will return null

    mysql> select str_to_date('Friday 08 May 2015','%l %d %F %Y');
    +-------------------------------------------------+
    | str_to_date('Friday 08 May 2015','%l %d %F %Y') |
    +-------------------------------------------------+
    | NULL                                            |
    +-------------------------------------------------+
    1 row in set, 1 warning (0.00 sec)
    

    Here is the list of formatting Specifier https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-format

    qid & accept id: (29949018, 29949517) query: SQL Select - Where search term contains '.' soup:

    soup wrap:

    The cell contains this:

    d  e  v  .  h  o  w  m  u  c  h
    64 65 76 A9 68 6F 77 6D 75 63 68
    

    Full stop should probably be 2E (it's a 7-bit ASCII character so it's the same byte in many encodings, including UTF-8):

    mysql> SELECT HEX('.');
    +----------+
    | HEX('.') |
    +----------+
    | 2E       |
    +----------+
    1 row in set (0.00 sec)
    

    But you have A9. That's not a 7-bit ASCII character and we don't know what encoding your data uses so we can't tell what it is (but it's clearly not a dot). In ISO-8859-1 and Windows-1252 it'd be a copyright symbol (©). In UTF-8 it'd be an invalid character, typically displayed as REPLACEMENT CHARACTER (�) by many clients.
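
    As a sketch, you can hunt down the affected rows before cleaning them up (hypothetical table and column names; the CAST avoids collation issues when mixing in a raw byte):

    SELECT id, name, HEX(name)
    FROM mytable
    WHERE CAST(name AS BINARY) LIKE CONCAT('%', UNHEX('A9'), '%')
    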

    qid & accept id: (30003686, 30017789) query: How do I query a polymorphic pivot table with Eloquent & Laravel 5 soup:

    soup wrap:

    First, rename the features table to carousel_video.

    Next, define the relationship in your Carousel model like:

    public function videos()
    {
        return $this->belongsToMany('YourAppNamespace\Video');
    }
    

    Then, query the Carousel model like:

    $videos = Carousel::find(2)->videos; //finds all videos associated with carousel having id of 2
    
    return $videos;
    

    You can do the opposite by defining a relationship on your Video model like:

    public function carousels()
    {
        return $this->belongsToMany('YourAppNamespace\Carousel');
    }
    

    And, querying like:

    $carousels = Video::find(2)->carousels; //finds all carousels associated with video having id of 2
    
    return $carousels;
    
    qid & accept id: (30030023, 30030609) query: Select sum of top three scores for each user soup:

    soup wrap:

    This is a pretty typical greatest-n-per-group problem. When I see those, I usually use a correlated subquery like this:

    SELECT *
    FROM myTable m
    WHERE(
      SELECT COUNT(*)
      FROM myTable mT
      WHERE mT.userId = m.userId AND mT.score >= m.score) <= 3;
    

    This is not the whole solution, as it only gives you the top three scores for each user in its own row. To get the total, you can use SUM() wrapped around that subquery like this:

    SELECT userId, SUM(score) AS totalScore
    FROM(
      SELECT userId, score
      FROM myTable m
      WHERE(
        SELECT COUNT(*)
        FROM myTable mT
        WHERE mT.userId = m.userId AND mT.score >= m.score) <= 3) tmp
    GROUP BY userId;
    

    Here is an SQL Fiddle example.

    EDIT

    Regarding the ordering (which I forgot the first time through), you can just order by totalScore in descending order, and then by MIN(timestamp) in ascending order so that users with the lowest timestamp appear first in the list. Here is the updated query:

    SELECT userId, SUM(score) AS totalScore
    FROM(
      SELECT userId, score, timeCol
      FROM myTable m
      WHERE(
        SELECT COUNT(*)
        FROM myTable mT
        WHERE mT.userId = m.userId AND mT.score >= m.score) <= 3) tmp
    GROUP BY userId
    ORDER BY totalScore DESC, MIN(timeCol) ASC;
    

    and here is an updated Fiddle link.

    EDIT 2

    As JPW pointed out in the comments, this query will not work if the user has the same score for multiple questions. To settle this, you can add an additional condition inside the subquery to order the user's three rows by timestamp as well, like this:

    SELECT userId, SUM(score) AS totalScore
    FROM(
      SELECT userId, score, timeCol
      FROM myTable m
      WHERE(
        SELECT COUNT(*)
        FROM myTable mT
        WHERE mT.userId = m.userId AND mT.score >= m.score 
          AND mT.timeCol <= m.timeCol) <= 3) tmp
    GROUP BY userId
    ORDER BY totalScore DESC, MIN(timeCol) ASC;
    

    I am still working on a solution to find out how to handle the scenario where the userid, score, and timestamp are all the same. In that case, you will have to find another tiebreaker. Perhaps you have a primary key column, and you can choose to take a higher/lower primary key?
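
    The greatest-n-per-group pattern above can be checked quickly against SQLite through Python's sqlite3 module; the table contents here are invented for the demo, with distinct scores per user so the tiebreaker issue does not arise:

```python
import sqlite3

# Sum of each user's top three scores via a correlated COUNT(*) subquery.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE myTable (userId INTEGER, score INTEGER);
INSERT INTO myTable VALUES
  (1, 10), (1, 20), (1, 30), (1, 40),
  (2, 5),  (2, 6),  (2, 7);
""")

rows = conn.execute("""
SELECT userId, SUM(score) AS totalScore
FROM (
  SELECT userId, score
  FROM myTable m
  WHERE (SELECT COUNT(*)
         FROM myTable mT
         WHERE mT.userId = m.userId AND mT.score >= m.score) <= 3
) tmp
GROUP BY userId
ORDER BY totalScore DESC;
""").fetchall()
# User 1 keeps 40+30+20, user 2 keeps 7+6+5.
```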

    qid & accept id: (30099821, 30099883) query: Query to find second largest value from every group soup:

    soup wrap:

    I think you can do what you want with the project_milestone table and row_number():

    select pm.*
    from (select pm.*,
                 row_number() over (partition by project_id order by completed_date desc) as seqnum
          from project_milestone pm
          where pm.completed_date is not null
         ) pm
    where seqnum = 2;
    

    If you need to include all projects, even those without two milestones, you can use a left join:

    select p.project_id, pm.milestone_id, pm.completed_date
    from projects p left join
         (select pm.*,
                 row_number() over (partition by project_id order by completed_date desc) as seqnum
          from project_milestone pm
          where pm.completed_date is not null
         ) pm
         on p.project_id = pm.project_id and pm.seqnum = 2;
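
    A minimal sketch of the row_number() approach, run in SQLite via Python (SQLite 3.25+ is assumed for window function support; the milestone data is invented):

```python
import sqlite3

# Second-latest completed milestone per project via row_number().
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE project_milestone (project_id INTEGER, milestone_id INTEGER,
                                completed_date TEXT);
INSERT INTO project_milestone VALUES
  (1, 10, '2015-01-01'), (1, 11, '2015-03-01'), (1, 12, '2015-02-01'),
  (2, 20, '2015-05-01');
""")

second_latest = conn.execute("""
SELECT project_id, milestone_id
FROM (
  SELECT pm.*,
         row_number() OVER (PARTITION BY project_id
                            ORDER BY completed_date DESC) AS seqnum
  FROM project_milestone pm
  WHERE pm.completed_date IS NOT NULL
)
WHERE seqnum = 2
ORDER BY project_id;
""").fetchall()
# Project 2 has only one milestone, so it drops out of the inner-join form.
```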
    
    qid & accept id: (30108652, 30108864) query: How to join 3 tables (1 lookup) with SQL soup:

    soup wrap:

    You have to join the country lookup table twice, once on users_country_ID and once on seller_country_ID, to get both the user's and the seller's country:

    create table table_1(user_ID int, seller_country_ID int)
    create table table_2(user_ID int, users_country_ID int)
    create table table_3(country_ID int, country_Name varchar(50))
    
    
    insert into table_1 values(1, 100)
    insert into table_1 values(2, 101)
    
    insert into table_2 values(1, 200)
    insert into table_2 values(2, 201)
    
    
    insert into table_3 values(100, 'USA')
    insert into table_3 values(101, 'China')
    insert into table_3 values(200, 'CANADA')
    insert into table_3 values(201, 'Japan')
    
    Select table_1.user_ID, uc.country_Name "User Country", sc.country_Name "Seller Country"
    FROM table_1 INNER JOIN table_2 ON table_1.user_ID= table_2.user_ID
    INNER JOIN table_3 uc ON table_2.users_country_ID= uc.country_ID
    INNER JOIN table_3 sc ON table_1.seller_country_ID= sc.country_ID
    

    OUTPUT

    user_ID   User Country   Seller Country
    1         CANADA         USA
    2         Japan          China
    

    DEMO SQL FIDDLE
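
    The same double join against the lookup table can be reproduced in SQLite through Python's sqlite3 module:

```python
import sqlite3

# Join the country lookup table (table_3) twice under different aliases.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table_1 (user_ID INTEGER, seller_country_ID INTEGER);
CREATE TABLE table_2 (user_ID INTEGER, users_country_ID INTEGER);
CREATE TABLE table_3 (country_ID INTEGER, country_Name TEXT);
INSERT INTO table_1 VALUES (1, 100), (2, 101);
INSERT INTO table_2 VALUES (1, 200), (2, 201);
INSERT INTO table_3 VALUES (100, 'USA'), (101, 'China'),
                           (200, 'CANADA'), (201, 'Japan');
""")

rows = conn.execute("""
SELECT t1.user_ID, uc.country_Name, sc.country_Name
FROM table_1 t1
JOIN table_2 t2 ON t1.user_ID = t2.user_ID
JOIN table_3 uc ON t2.users_country_ID = uc.country_ID
JOIN table_3 sc ON t1.seller_country_ID = sc.country_ID
ORDER BY t1.user_ID;
""").fetchall()
```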

    qid & accept id: (30134105, 30134283) query: Update duplicate latitude values by iteratively increasing margin soup:

    soup wrap:

    I don't know what your exact data looks like, but suppose you have this table, called tbl:

            ID        LAT        LON
    ---------- ---------- ----------
             1         20         25
             2         30         33
             3         30         33
             4         55         60
             5         55         60
             6         55         60
    

    You could run the following:

    select  id,
            case when rn > 1 then lat+rn-1 else lat end as lat,
            lon
    from(
    select  t.*,
            row_number() over(partition by lat, lon order by id) as rn
    from    tbl t
    ) x;
    

    To get:

            ID        LAT        LON
    ---------- ---------- ----------
             1         20         25
             2         30         33
             3         31         33
             4         55         60
             5         56         60
             6         57         60
    

    Notice how IDs 2 and 3 were dups, and IDs 4, 5, and 6, were dups. They are no longer exact dups because the lat value has increased, sequentially, to make the rows not duplicates. They go up by one for each next duplicate.

    Fiddle: http://sqlfiddle.com/#!4/ef959/1/0

    Edit (based on your edit)

    select  id,
            case when rn > .0003 then lat+rn-.0003 else lat end as lat,
            lon
    from(
    select  t.*,
            row_number() over(partition by lat, lon order by id)*.0003 as rn
    from    tbl t
    ) x;
    

    The above will ascend by .0003 rather than 1.

    See new fiddle here: http://sqlfiddle.com/#!4/21506/6/0
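
    The integer variant of this de-duplication trick can be verified in SQLite via Python (SQLite 3.25+ assumed for row_number()):

```python
import sqlite3

# Bump duplicate (lat, lon) rows by their row_number() within the group.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl (id INTEGER, lat INTEGER, lon INTEGER);
INSERT INTO tbl VALUES (1, 20, 25), (2, 30, 33), (3, 30, 33),
                       (4, 55, 60), (5, 55, 60), (6, 55, 60);
""")

adjusted = conn.execute("""
SELECT id,
       CASE WHEN rn > 1 THEN lat + rn - 1 ELSE lat END AS lat,
       lon
FROM (
  SELECT t.*,
         row_number() OVER (PARTITION BY lat, lon ORDER BY id) AS rn
  FROM tbl t
)
ORDER BY id;
""").fetchall()
# Each later duplicate gets lat increased by one more than the previous one.
```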

    qid & accept id: (30141267, 30141332) query: How do I add an exception to a query that finds the most popular records? soup:

    soup wrap:

    If you want screen casts that have none of the 9 tags, then the logic is more like this:

    SELECT v.screencastId, v.title,
           GROUP_CONCAT(m.tagName) as tags
    FROM screencasts v JOIN
         screencastTags m
         ON v.screencastId = m.screencastId LEFT JOIN
         (SELECT t.tagName
          FROM tags t JOIN
               screencastTags m
               ON m.tagName = t.tagName
          GROUP BY t.tagName
          ORDER BY COUNT(*) DESC, t.tagName DESC
          LIMIT 9
         ) tags9
         ON m.tagname = tags9.tagname
    GROUP BY v.screencastId, v.title
    HAVING SUM(tags9.tagname IS NOT NULL) = 0;
    

    What is this doing? The LEFT JOIN is matching tags to the nine original tags (assuming the database has not been updated between the two queries). The aggregation is by the screen cast. The HAVING clause then checks that there is no match to the nine tags. This guarantees that none of the nine tags is among the values returned by this query.

    EDIT:

    Ooops, I think I misinterpreted the question. I thought you wanted screen casts that have none of the nine tags. Instead, you want all the tags for screen casts that have additional tags.

    This is actually a small variation on the above query. Instead of checking that all tags are different, this checks that any tag is different. The only change is to the HAVING clause:

    SELECT v.screencastId, v.title,
           GROUP_CONCAT(m.tagName) as tags
    FROM screencasts v JOIN
         screencastTags m
         ON v.screencastId = m.screencastId LEFT JOIN
         (SELECT t.tagName
          FROM tags t JOIN
               screencastTags m
               ON m.tagName = t.tagName
          GROUP BY t.tagName
          ORDER BY COUNT(*) DESC, t.tagName DESC
          LIMIT 9
         ) tags9
         ON m.tagname = tags9.tagname
    GROUP BY v.screencastId, v.title
    HAVING SUM(tags9.tagname IS NULL) > 0;
    
    qid & accept id: (30166727, 30167775) query: Select data from one table & then rename the columns based on another table in SQL server soup:

    soup wrap:

    You can use dynamic sql to generate the query to execute based on the values in the first table. Here is an example:

    DECLARE @dynamicSQL nvarchar(200)

    SET @dynamicSQL = 'SELECT ' + (SELECT stuff((select ',' + name + ' AS ' + value
                      from Table1
                      for xml path('')),1,1,'')) + ' FROM Table2'

    EXECUTE sp_executesql @dynamicSQL

    SQL Fiddle: http://sqlfiddle.com/#!6/768f9/10
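
    The same idea, building the select list (with aliases) from a mapping table and then executing the generated statement, sketched in SQLite via Python; the table layout here is an assumption for the demo:

```python
import sqlite3

# Table1 maps real column names to display names; Table2 holds the data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (name TEXT, value TEXT);
CREATE TABLE Table2 (col_a INTEGER, col_b INTEGER);
INSERT INTO Table1 VALUES ('col_a', 'FirstValue'), ('col_b', 'SecondValue');
INSERT INTO Table2 VALUES (1, 2);
""")

# Build "col_a AS FirstValue, col_b AS SecondValue" from the mapping table.
pairs = conn.execute("SELECT name, value FROM Table1").fetchall()
select_list = ", ".join(f"{name} AS {alias}" for name, alias in pairs)
dynamic_sql = f"SELECT {select_list} FROM Table2"

cur = conn.execute(dynamic_sql)
columns = [d[0] for d in cur.description]  # the renamed headers
row = cur.fetchone()
```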

    qid & accept id: (30174842, 30175507) query: Return the proportionate share of the same type soup:

    soup wrap:

    Set up:

    create table sales (
        ID numeric,
        type varchar(20),
        price decimal
    );
    
    insert into sales values (1,'bike','900.00');
    insert into sales values (2,'bike','100.00');
    

    Query:

    select s1.ID, s1.type, s1.price, (s1.price/s2.sum_price) as proportion
    from sales s1
    inner join (
        select type, sum(price) as sum_price
        from sales
        group by type
    ) s2
    on s1.type = s2.type;
    

    The inner query gets all the sums by type. This emulates sum(price) over (partition by type), which is available in some databases but not in MySQL.
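
    The derived-table emulation runs unchanged in SQLite, so it can be checked from Python:

```python
import sqlite3

# Each row's price divided by the total price of its type.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (ID INTEGER, type TEXT, price REAL);
INSERT INTO sales VALUES (1, 'bike', 900.0), (2, 'bike', 100.0);
""")

rows = conn.execute("""
SELECT s1.ID, s1.type, s1.price, s1.price / s2.sum_price AS proportion
FROM sales s1
INNER JOIN (SELECT type, SUM(price) AS sum_price
            FROM sales GROUP BY type) s2
  ON s1.type = s2.type
ORDER BY s1.ID;
""").fetchall()
```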

    qid & accept id: (30191802, 30192081) query: SQL Add hours for employees soup:

    soup wrap:

    Basically you have at least 2 options:

    Option 1 - Use DISTINCT and SUM with OVER clause:

    SELECT DISTINCT a.*, 
           SUM(DATEDIFF(mi, b.timein, b.timeout)) OVER(PARTITION BY a.id) AS total_mins 
    FROM tbl_people a 
    LEFT JOIN tbl_register b ON a.id=b.personid
    

    Option 2 - Use a derived table for the GROUP BY part:

    SELECT a.*,
           total_mins            
    from tbl_people a 
    left join (
        SELECT personid, 
               SUM(DATEDIFF(mi, timein, timeout)) AS total_mins 
        FROM tbl_register 
        GROUP BY personid 
     ) b ON a.id=b.personid
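
    Option 2 can be sketched in SQLite from Python; SQLite has no DATEDIFF, so the demo computes minutes from julianday() instead, and the sample people and times are invented:

```python
import sqlite3

# Derived table sums each person's minutes, then LEFT JOINs back to people.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl_people (id INTEGER, name TEXT);
CREATE TABLE tbl_register (personid INTEGER, timein TEXT, timeout TEXT);
INSERT INTO tbl_people VALUES (1, 'Ann'), (2, 'Bob');
INSERT INTO tbl_register VALUES
  (1, '2015-05-12 09:00', '2015-05-12 09:30'),
  (1, '2015-05-12 10:00', '2015-05-12 11:00'),
  (2, '2015-05-12 09:00', '2015-05-12 09:15');
""")

rows = conn.execute("""
SELECT a.id, a.name, b.total_mins
FROM tbl_people a
LEFT JOIN (
  SELECT personid,
         CAST(ROUND(SUM((julianday(timeout) - julianday(timein)) * 1440))
              AS INTEGER) AS total_mins
  FROM tbl_register
  GROUP BY personid
) b ON a.id = b.personid
ORDER BY a.id;
""").fetchall()
# Ann: 30 + 60 minutes; Bob: 15 minutes.
```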
    
    qid & accept id: (30205573, 30205684) query: SQL: Total days in a month soup:

    soup wrap:

    You can get the number of days in the month of a given date like this:

    DECLARE @date DATETIME = '2014-01-01'
    SELECT DATEDIFF(DAY, @date, DATEADD(MONTH, 1, @date))
    

    And the query:

    SELECT  ID
            ,[Date]
            ,[Time]
            ,Value1
            ,Value2
            ,DATEDIFF(DAY, [Date], DATEADD(MONTH, 1, [Date])) AS TotalDayinMonth
            ,Value1 * 100 * DATEDIFF(DAY, [Date], DATEADD(MONTH, 1, [Date])) * Value2 AS Result
    FROM yourTable
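
    The T-SQL trick DATEDIFF(DAY, @date, DATEADD(MONTH, 1, @date)) just counts the days in the date's month; in Python the same number comes directly from the standard library:

```python
import calendar
from datetime import date

# Days in the month containing the given date.
def days_in_month(d: date) -> int:
    return calendar.monthrange(d.year, d.month)[1]

jan  = days_in_month(date(2014, 1, 1))   # 31
feb  = days_in_month(date(2014, 2, 10))  # 28 (2014 is not a leap year)
leap = days_in_month(date(2016, 2, 1))   # 29
```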
    
    qid & accept id: (30211424, 30211655) query: wordpress display date table with inner join and the same variable soup:

    soup wrap:

    Select the columns you want with an alias

    SELECT wp_a.name AS a_name FROM wp_a 
    

    and you can get the value like

     $row['a_name']
    

    likewise you can select fields from the second table

    qid & accept id: (30239227, 30239332) query: How to do autoincrement based on last value from another table? soup:

    soup wrap:

    I think you are looking for something like this.

    Use MAX(ID) from @t plus id for the incremented ID values, and ROW_NUMBER() with PARTITION BY to get the per-partition VID values:

    INSERT INTO @t (ID,VID,Sname,Rname)
    Select (select MAX(ID) FROM @t) + id as Id,ROW_NUMBER()OVER(partition by id ORDER BY VID)VID,Sname,Rname from @tt
    

    Inserted Values

    4602    1   Bike    Dio
    4602    2   Bike    Pulsar
    4602    3   Bike    Duke
    4603    1   Cloth   jeans
    4603    2   Cloth   shirts
    4603    3   Cloth   short
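
    The same insert pattern can be tried in SQLite from Python; ordinary tables stand in for the @t / @tt table variables, the sample rows are invented, and SQLite 3.25+ is assumed for row_number():

```python
import sqlite3

# New IDs start from MAX(ID) of t, offset by the source id; VID restarts
# per id via row_number().
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t  (ID INTEGER, VID INTEGER, Sname TEXT, Rname TEXT);
CREATE TABLE tt (id INTEGER, VID INTEGER, Sname TEXT, Rname TEXT);
INSERT INTO t VALUES (4601, 1, 'Old', 'Row');
INSERT INTO tt VALUES
  (1, 7, 'Bike', 'Dio'), (1, 8, 'Bike', 'Pulsar'),
  (2, 7, 'Cloth', 'jeans');
""")

# Snapshot MAX(ID) first, mirroring how the T-SQL statement evaluates it.
max_id = conn.execute("SELECT MAX(ID) FROM t").fetchone()[0]
conn.execute("""
INSERT INTO t (ID, VID, Sname, Rname)
SELECT ? + id,
       row_number() OVER (PARTITION BY id ORDER BY VID),
       Sname, Rname
FROM tt;
""", (max_id,))

inserted = conn.execute(
    "SELECT ID, VID, Sname, Rname FROM t WHERE ID > 4601 ORDER BY ID, VID"
).fetchall()
```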
    
    qid & accept id: (30245181, 30245296) query: How to check a date range in SQL? soup:

    soup wrap:

    You need an additional branch on that conditional, if you're to cover all your bases:

    ELSE IF @date1 <= @startdate
    

    You've only tested for 1) between two dates and 2) greater than the last date. At least one of those should include an equality check, as well, or else if your date is equal you won't match up.

    Now, you could go with a plain old ELSE block at the end, to catch everything, but I suspect that you're actually looking for something more like:

    IF @date1 > @startdate AND @date1 < @enddate
        BEGIN
            SET @finaldate = '1'
        END
    ELSE IF @date1 >= @enddate
        BEGIN
            SET @finaldate = '2'
        END
    ELSE IF @date1 <= @startdate
        BEGIN
            SET @finaldate = '3'
        END
    
    SELECT @finaldate AS Final_Date
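
    The three-branch classification can be expressed as a plain function to check that every date now falls into exactly one bucket (returning the same '1' / '2' / '3' codes as the T-SQL above):

```python
from datetime import date

# '1' = strictly between, '2' = on/after end, '3' = on/before start.
def classify(d: date, start: date, end: date) -> str:
    if start < d < end:
        return '1'
    elif d >= end:
        return '2'
    else:  # d <= start, the branch the original code was missing
        return '3'

inside = classify(date(2015, 5, 15), date(2015, 5, 1), date(2015, 6, 1))
after  = classify(date(2015, 6, 1),  date(2015, 5, 1), date(2015, 6, 1))
before = classify(date(2015, 5, 1),  date(2015, 5, 1), date(2015, 6, 1))
```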
    
    qid & accept id: (30263295, 30266884) query: combining AND operator in mysql soup:

    soup wrap:

    There's nothing wrong with your SQL. Check your data.

    create table planmenu ( 
      name varchar(20),
      type varchar(20),
      dishcontent varchar(20)
    );
    
    insert into planmenu values ('a','b','c');
    

    Now if you do this:

    SELECT * 
    FROM planmenu 
    WHERE name LIKE '%z%' 
    AND type LIKE '%z%' 
    AND dishcontent LIKE '%c%';
    

    You'll get zero records back because there are no records matching all three conditions.

    If you do this:

    SELECT * 
    FROM planmenu 
    WHERE name LIKE '%z%' 
    AND type LIKE '%z%' 
    OR dishcontent LIKE '%c%';
    

    You'll get a record back because AND binds tighter than OR: the condition is evaluated as (first AND second) OR third, so either the first two conditions together or the third condition alone has to be true. It is the equivalent of running this:

    SELECT * 
    FROM planmenu 
    WHERE (name LIKE '%z%' AND type LIKE '%z%')
    OR dishcontent LIKE '%c%';
    

    Check your data. You don't have any records matching all three conditions.
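
    The precedence point is easy to verify in SQLite from Python, using the same one-row table:

```python
import sqlite3

# AND binds tighter than OR, so mixing them changes which rows match.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE planmenu (name TEXT, type TEXT, dishcontent TEXT);
INSERT INTO planmenu VALUES ('a', 'b', 'c');
""")

all_and = conn.execute("""
SELECT COUNT(*) FROM planmenu
WHERE name LIKE '%z%' AND type LIKE '%z%' AND dishcontent LIKE '%c%';
""").fetchone()[0]   # all three must match: no rows

mixed = conn.execute("""
SELECT COUNT(*) FROM planmenu
WHERE name LIKE '%z%' AND type LIKE '%z%' OR dishcontent LIKE '%c%';
""").fetchone()[0]   # parsed as (... AND ...) OR ...: the row matches
```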

    qid & accept id: (30298547, 30298649) query: Condition in WHERE clause (Oracle) soup:

    soup wrap:

    You could join the subquery, i.e. the CTE with the table, and then use the column name in the filter predicate. The result of the subquery in the WITH clause acts like a temporary table.

    For example,

    WITH SUBQ AS
      (SELECT dim.MONTH_NAME AS current_month_name ,
        dim.year_period      AS current_month ,
        dim.PERIOD_YEAR      AS YEAR ,
        CASE
          WHEN dim.year_period NOT LIKE '%01'
          THEN to_number(CONCAT(TO_CHAR(dim.PERIOD_YEAR-1) , '01' ))
          WHEN dim.year_period LIKE '%01'
          THEN to_number(CONCAT(TO_CHAR(dim.PERIOD_YEAR-2) , '01' ))
        END AS START_DATE ,
        CASE
          WHEN dim.year_period NOT LIKE '%01'
          THEN to_number(CONCAT(TO_CHAR(dim.PERIOD_YEAR) , '01' ))
          WHEN dim.year_period LIKE '%01'
          THEN to_number(CONCAT(TO_CHAR(dim.PERIOD_YEAR-1) , '01' ))
        END AS ENDDATE
      FROM dim_periods dim
      WHERE dim.year_period=to_number(TO_CHAR(SYSDATE, 'YYYYMM'))
      )
    SELECT fd.COLUMNS,
      q.COLUMNS
    FROM financial_data fd
    JOIN subq q
    ON (fd.KEY = q.KEY) -- join key
    WHERE fd.year_period BETWEEN q.start_date AND q.enddate;
    

    So, SUBQ acts like a temporary table, which you join with financial_data table.

    UPDATE: The OP doesn't want the ANSI join syntax, so here is the same query with the old-style comma join:

    WITH SUBQ AS
      (SELECT dim.MONTH_NAME AS current_month_name ,
        dim.year_period      AS current_month ,
        dim.PERIOD_YEAR      AS YEAR ,
        CASE
          WHEN dim.year_period NOT LIKE '%01'
          THEN to_number(CONCAT(TO_CHAR(dim.PERIOD_YEAR-1) , '01' ))
          WHEN dim.year_period LIKE '%01'
          THEN to_number(CONCAT(TO_CHAR(dim.PERIOD_YEAR-2) , '01' ))
        END AS START_DATE ,
        CASE
          WHEN dim.year_period NOT LIKE '%01'
          THEN to_number(CONCAT(TO_CHAR(dim.PERIOD_YEAR) , '01' ))
          WHEN dim.year_period LIKE '%01'
          THEN to_number(CONCAT(TO_CHAR(dim.PERIOD_YEAR-1) , '01' ))
        END AS ENDDATE
      FROM dim_periods dim
      WHERE dim.year_period=to_number(TO_CHAR(SYSDATE, 'YYYYMM'))
      )
    SELECT fd.COLUMNS,
      q.COLUMNS
    FROM financial_data fd,
      subq q
    WHERE fd.KEY = q.KEY -- join key
    AND fd.year_period BETWEEN q.start_date AND q.enddate;
    
    qid & accept id: (30309531, 30310126) query: How to determine number of records in a subquery soup:

    soup wrap:

    The problem is in your parent SQL:

    (SELECT COUNT(*)
     FROM
         (SELECT cjbe.EMPLID
          FROM PS_JOB cjbe
          WHERE cjbe.POSITION_NBR = jbe.POSITION_NBR
          GROUP BY cjbe.EMPLID)) "Qty_In_Position?"
    

    jbe.POSITION_NBR is not available because you are inside a second nested subquery. It is available one level up (inside the first subquery where you have SELECT COUNT(*)).
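    To see why the flattened form works, here is a small sqlite3 sketch (table and data invented) where the correlated COUNT(DISTINCT ...) sits only one level deep, so the outer alias is in scope:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ps_job (emplid TEXT, position_nbr TEXT);
INSERT INTO ps_job VALUES
  ('E1', 'P100'), ('E1', 'P100'),   -- E1 has two rows in the same position
  ('E2', 'P100'),
  ('E3', 'P200');
""")

# The scalar subquery references jbe.position_nbr from exactly one level
# up, which correlated subqueries allow.
rows = conn.execute("""
SELECT jbe.emplid,
       (SELECT COUNT(DISTINCT cjbe.emplid)
        FROM ps_job cjbe
        WHERE cjbe.position_nbr = jbe.position_nbr) AS qty_in_position
FROM ps_job jbe
GROUP BY jbe.emplid
ORDER BY jbe.emplid
""").fetchall()
print(rows)  # [('E1', 2), ('E2', 2), ('E3', 1)]
```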

    The parent and subquery should look like this:

    SELECT jbe.EMPLID "Employee_ID", REPLACE(nam.NAME,',',', ') "Name", jbe.HR_STATUS "HR_Status" , jbe.REG_TEMP "Reg/Temp" ,jbe.FULL_PART_TIME "FT/PT" ,jbe.SAL_ADMIN_PLAN "Emp_Type" ,jbe.DEPTID "Dept_ID" ,dpt.descr "Dept_Name" ,
    (SELECT Min(pj.EFFDT) AS HIRE_DT_1
     FROM PS_JOB pj
     WHERE pj.EMPLID=jbe.emplid
         AND action IN('HIR',
                       'REH')
         AND pj.empl_rcd = 0) "Emp_Hired_Into_Pos_Dt" , jbe.POSITION_NBR "Position_Num" ,
    (SELECT MIN(EFFDT)
     FROM PS_JOB
     WHERE 1=1
         AND POSITION_NBR = jbe.POSITION_NBR) "Position_Orig_Created_On" , pos.DESCR "Position_Job_Title" ,dist.ACCT_CD "Budget" ,dist.DIST_PCT "Distribution" ,
    (SELECT count(distinct cjbe.EMPLID)
              FROM PS_JOB cjbe
              WHERE cjbe.POSITION_NBR = jbe.POSITION_NBR) "Qty_In_Position?"
    FROM PS_JOB jbe,
     PS_NAMES nam,
     PS_JOB_EARNS_DIST dist,
     PS_POSITION_DATA pos,
     ps_dept_tbl dpt
    
    WHERE (dist.EMPLID = jbe.EMPLID
       AND dist.EMPL_RCD = jbe.EMPL_RCD
       AND dist.EFFDT = jbe.EFFDT
       AND dist.EFFSEQ = jbe.EFFSEQ
       AND (jbe.EFFDT =
                (SELECT MAX(A_ED.EFFDT)
                 FROM PS_JOB A_ED
                 WHERE jbe.EMPLID = A_ED.EMPLID
                     AND jbe.EMPL_RCD = A_ED.EMPL_RCD
                     AND A_ED.EFFDT <= SYSDATE)
            AND jbe.EFFSEQ =
                (SELECT MAX(A_ES.EFFSEQ)
                 FROM PS_JOB A_ES
                 WHERE jbe.EMPLID = A_ES.EMPLID
                     AND jbe.EMPL_RCD = A_ES.EMPL_RCD
                     AND jbe.EFFDT = A_ES.EFFDT)
            AND jbe.EMPL_RCD = 0
            AND jbe.HR_STATUS = 'A'
            AND REGEXP_LIKE (SUBSTR(jbe.POSITION_NBR,1,2), '^S[0-9]')
            AND jbe.EMPLID = nam.EMPLID
            AND nam.EFFDT =
                (SELECT MAX(B_ED.EFFDT)
                 FROM PS_NAMES B_ED
                 WHERE nam.EMPLID = B_ED.EMPLID
                     AND nam.NAME_TYPE = B_ED.NAME_TYPE
                     AND B_ED.EFFDT <= SYSDATE)
            AND nam.NAME_TYPE = 'PRI'))
    
        AND pos.position_nbr(+) = jbe.position_nbr
    AND pos.effdt=
        (SELECT max(p2.effdt)
         FROM ps_position_data p2
         WHERE p2.position_nbr=pos.position_nbr
             AND p2.effdt<=sysdate)
    AND dpt.deptid(+) = jbe.deptid
    AND dpt.setid(+) = jbe.setid_dept
    AND (dpt.EFFDT=
             (SELECT MAX(d2.EFFDT)
              FROM PS_DEPT_TBL d2
              WHERE dpt.DEPTID=d2.DEPTID
                  AND jbe.SETID_DEPT=d2.SETID
                  AND d2.EFFDT<=SYSDATE))
    
        AND (
             (SELECT MIN(EFFDT)
              FROM PS_JOB
              WHERE 1=1
                  AND POSITION_NBR = jbe.POSITION_NBR) BETWEEN TO_DATE ('01-JAN-2014', 'DD-MON-YYYY') AND TO_DATE ('31-DEC-2014', 'DD-MON-YYYY')
         OR
             (SELECT Min(pj.EFFDT) AS HIRE_DT_1
              FROM PS_JOB pj
              WHERE pj.EMPLID=jbe.emplid
                  AND action IN('HIR',
                                'REH')
                  AND pj.empl_rcd = 0) BETWEEN TO_DATE ('01-JAN-2014', 'DD-MON-YYYY') AND TO_DATE ('31-DEC-2014', 'DD-MON-YYYY'))
    ORDER BY jbe.POSITION_NBR
    
    qid & accept id: (30356880, 30362409) query: Count of days in a period soup:
    soup wrap:

    Period start is always sysdate and end sysdate - 5 years

    You can get this using: SYSDATE and SYSDATE - INTERVAL '5' YEAR

    Item 1) 01.01.2010 - 31.12.2010. Valid range: 15.05.2010 - 31.12.2010 = ~195 days

    Item 2) 01.01.2015 - 31.12.2015. Valid range: 01.01.2015 - 15.05.2015 = ~170 days

    Assuming these examples show start_date - end_date and the valid range is your expected answer for that particular SYSDATE then you can use:

    SQL Fiddle

    Oracle 11g R2 Schema Setup:

    CREATE TABLE items ( "user", start_date, end_date ) AS
              SELECT 'me', DATE '2010-01-01', DATE '2010-12-31' FROM DUAL
    UNION ALL SELECT 'me', DATE '2015-01-01', DATE '2015-12-31' FROM DUAL
    UNION ALL SELECT 'me', DATE '2009-01-01', DATE '2009-12-31' FROM DUAL
    UNION ALL SELECT 'me', DATE '2009-01-01', DATE '2016-12-31' FROM DUAL
    UNION ALL SELECT 'me', DATE '2012-01-01', DATE '2012-12-31' FROM DUAL
    UNION ALL SELECT 'me', DATE '2013-01-01', DATE '2013-01-01' FROM DUAL;
    

    Query 1:

    SELECT "user",
           TO_CHAR( start_date, 'YYYY-MM-DD' ) AS start_date,
           TO_CHAR( end_date, 'YYYY-MM-DD' ) AS end_date,
           TO_CHAR( GREATEST(TRUNC(i.start_date), TRUNC(SYSDATE)-INTERVAL '5' YEAR), 'YYYY-MM-DD' ) AS valid_start,
           TO_CHAR( LEAST(TRUNC(i.end_date),TRUNC(SYSDATE)), 'YYYY-MM-DD' ) AS valid_end,
           LEAST(TRUNC(i.end_date),TRUNC(SYSDATE))
             - GREATEST(TRUNC(i.start_date), TRUNC(SYSDATE)-INTERVAL '5' YEAR)
             + 1
             AS total_days 
    FROM   items i
    WHERE  i."user" = 'me'
    AND    TRUNC(i.start_date) <= TRUNC(SYSDATE)
    AND    TRUNC(i.end_date)   >= TRUNC(SYSDATE) - INTERVAL '5' YEAR
    

    Results:

    | user | START_DATE |   END_DATE | VALID_START |  VALID_END | TOTAL_DAYS |
    |------|------------|------------|-------------|------------|------------|
    |   me | 2010-01-01 | 2010-12-31 |  2010-05-21 | 2010-12-31 |        225 |
    |   me | 2015-01-01 | 2015-12-31 |  2015-01-01 | 2015-05-21 |        141 |
    |   me | 2009-01-01 | 2016-12-31 |  2010-05-21 | 2015-05-21 |       1827 |
    |   me | 2012-01-01 | 2012-12-31 |  2012-01-01 | 2012-12-31 |        366 |
    |   me | 2013-01-01 | 2013-01-01 |  2013-01-01 | 2013-01-01 |          1 |
    

    This assumes that the start date is at the beginning of the day (00:00) and the end date is at the end of the day (24:00) - so, if the start and end dates are the same then you are expecting the result to be 1 total day (i.e. the period 00:00 - 24:00). If you are, instead, expecting the result to be 0 then remove the +1 from the calculation of the total days value.
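    The clamping logic (GREATEST of the starts, LEAST of the ends, plus one for the inclusive convention) can be sketched with Python's sqlite3, where the multi-argument scalar MAX/MIN functions play the role of Oracle's GREATEST/LEAST; the dates below are taken from the result table above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def overlap_days(start, end, win_start, win_end):
    # Clamp [start, end] to the window, then count days inclusively:
    # the +1 matches the "start at 00:00, end at 24:00" convention above.
    row = conn.execute(
        "SELECT CAST(julianday(MIN(?, ?)) - julianday(MAX(?, ?)) AS INTEGER) + 1",
        (end, win_end, start, win_start)).fetchone()
    return row[0]

print(overlap_days('2010-01-01', '2010-12-31', '2010-05-21', '2015-05-21'))  # 225
print(overlap_days('2013-01-01', '2013-01-01', '2010-05-21', '2015-05-21'))  # 1
```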

    Query 2:

    If you want the sum of all these valid ranges and are happy to count dates in overlapping ranges multiple times then just wrap it in the SUM aggregate function:

    SELECT SUM( LEAST(TRUNC(i.end_date),TRUNC(SYSDATE))
             - GREATEST(TRUNC(i.start_date), TRUNC(SYSDATE)-INTERVAL '5' YEAR)
             + 1 )
             AS total_days 
    FROM   items i
    WHERE  i."user" = 'me'
    AND    TRUNC(i.start_date) <= TRUNC(SYSDATE)
    AND    TRUNC(i.end_date)   >= TRUNC(SYSDATE) - INTERVAL '5' YEAR
    

    Results:

    | TOTAL_DAYS |
    |------------|
    |       2560 |
    

    Query 3:

    Now if you want to get a count of all the valid days in the range and not count overlap in ranges multiple times then you can do:

    WITH ALL_DATES_IN_RANGE AS (
      SELECT TRUNC(SYSDATE) - LEVEL + 1 AS valid_date
      FROM   DUAL
      CONNECT BY LEVEL <= SYSDATE - (SYSDATE - INTERVAL '5' YEAR) + 1
    )
    SELECT COUNT(1) AS TOTAL_DAYS
    FROM   ALL_DATES_IN_RANGE a
    WHERE  EXISTS ( SELECT 'X'
                    FROM   items i
                    WHERE  a.valid_date BETWEEN i.start_date AND i.end_date
                    AND    i."user" = 'me' )
    

    Results:

    | TOTAL_DAYS |
    |------------|
    |       1827 |
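    The same de-duplication idea can be sketched in Python's sqlite3 using a recursive CTE as the calendar (SQLite has no CONNECT BY); the table and dates are invented. The naive per-range sum here would give 10 + 11 = 21 days, while the distinct count is 15:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (start_date TEXT, end_date TEXT);
-- two overlapping ranges: Jan 1-10 and Jan 5-15 cover 15 distinct days
INSERT INTO items VALUES ('2020-01-01', '2020-01-10'),
                         ('2020-01-05', '2020-01-15');
""")

# Generate every calendar day in January, then count only the days that
# fall inside at least one range -- overlaps are counted once.
total = conn.execute("""
WITH RECURSIVE all_dates(d) AS (
    SELECT '2020-01-01'
    UNION ALL
    SELECT date(d, '+1 day') FROM all_dates WHERE d < '2020-01-31'
)
SELECT COUNT(*) FROM all_dates a
WHERE EXISTS (SELECT 1 FROM items i
              WHERE a.d BETWEEN i.start_date AND i.end_date)
""").fetchone()[0]
print(total)  # 15
```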
    
    qid & accept id: (30391960, 30392553) query: oracle sql varray contains an element soup:

    soup wrap:

    You could use the condition:

    IF 'element' MEMBER OF collection_name THEN
    

    For example,

    SQL> SET SERVEROUTPUT ON
    SQL> DECLARE
      2  TYPE v_array
      3  IS
      4    TABLE OF VARCHAR2(200);
      5    my_array v_array;
      6  BEGIN
      7    my_array := v_array('1','2','3','4');
      8    IF '4' member OF my_array THEN
      9      dbms_output.put_line('yes');
     10    ELSE
     11      dbms_output.put_line('no');
     12    END IF;
     13  END;
     14  /
    yes
    
    PL/SQL procedure successfully completed.
    
    SQL>
    
    qid & accept id: (30392961, 30393993) query: Create query that contains movements out of destination list soup:

    soup wrap:

    This one gives your expected results. Demo fiddle is here.

    DECLARE @date DATE = '20150826'
    
    SELECT t1.[Date], t1.Container, 
           (SELECT TOP(1) t2.Location 
                   FROM Table1 t2
                   WHERE t2.Container = t1.Container AND t2.[date] < t1.[date]
                   ORDER BY t2.[Date] DESC ) [from], 
            t1.Location [To], t1.Scrapped
    FROM Table1 t1
    WHERE t1.[Date] >= @date
    ORDER BY t1.[Date]
    

    Output:

    |                     Date |  Container |   from | To | Scrapped |
    |--------------------------|------------|--------|----|----------|
    | August, 26 2015 00:00:00 | Container1 |      A |  D |   (null) |
    | August, 26 2015 00:00:00 | Container2 |      B |  A |   (null) |
    | August, 26 2015 00:00:00 | Container3 |      C |  B |   (null) |
    | August, 27 2015 00:00:00 | Container1 |      D |  D |        x |
    | August, 27 2015 00:00:00 | Container4 | (null) |  B |   (null) |
    | August, 27 2015 00:00:00 | Container2 |      A |  C |   (null) |
    | August, 27 2015 00:00:00 | Container3 |      B |  A |   (null) |
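    The correlated TOP(1) lookup translates directly to other engines as ORDER BY ... LIMIT 1. A small sqlite3 sketch with invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE moves (d TEXT, container TEXT, location TEXT);
INSERT INTO moves VALUES
  ('2015-08-25', 'C1', 'A'),
  ('2015-08-26', 'C1', 'D'),
  ('2015-08-27', 'C1', 'D');
""")

# For each movement, the scalar subquery fetches the latest earlier
# location of the same container -- the "from" side of the move.
rows = conn.execute("""
SELECT t1.d, t1.container,
       (SELECT t2.location FROM moves t2
        WHERE t2.container = t1.container AND t2.d < t1.d
        ORDER BY t2.d DESC LIMIT 1) AS from_loc,
       t1.location AS to_loc
FROM moves t1
WHERE t1.d >= '2015-08-26'
ORDER BY t1.d
""").fetchall()
print(rows)  # [('2015-08-26', 'C1', 'A', 'D'), ('2015-08-27', 'C1', 'D', 'D')]
```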
    
    qid & accept id: (30404402, 30404447) query: Oracle pl/sql script which increments number soup:

    soup wrap:

    PL/SQL doesn't have the ++ syntactic sugar. You'd need to explicitly change the value of the variable.

    DECLARE
      id integer := 10;
    BEGIN
      DELETE FROM myTable;
      INSERT INTO myTable( id, value ) VALUES( id, 'a value' );
      id := id + 1;
      INSERT INTO myTable( id, value ) VALUES( id, 'another value' );
      id := id + 1;
      ...
    END;
    

    At that point, and since you want to ensure consistency, you may be better off hard-coding the id values just like you are hard-coding the value values, i.e.

    BEGIN
      DELETE FROM myTable;
      INSERT INTO myTable( id, value ) VALUES( 10, 'a value' );
      INSERT INTO myTable( id, value ) VALUES( 11, 'another value' );
      ...
    END;
    
    qid & accept id: (30437156, 30437307) query: How to select column with greatest difference between dates - MySQL soup:

    soup wrap:

    Sounds like you need the difference between the max and min per epc, so this:

    select epc, max(`datetime`), min(`datetime`), timediff(max(`datetime`), min(`datetime`))
      from Track_Record
      group by epc
      order by timediff(max(`datetime`), min(`datetime`)) desc
      limit 1;
    

    Results from your sample data above:

    +-----------------------------+---------------------+---------------------+--------------------------------------------+
    | epc                         | max(`datetime`)     | min(`datetime`)     | timediff(max(`datetime`), min(`datetime`)) |
    +-----------------------------+---------------------+---------------------+--------------------------------------------+
    | 03.0000A89.00016F.000169DCD | 2015-10-15 18:23:18 | 2011-03-01 11:43:26 | 838:59:59                                  |
    +-----------------------------+---------------------+---------------------+--------------------------------------------+
    1 row in set, 1 warning (0.00 sec)
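    Two caveats: the per-epc aggregates rely on a GROUP BY epc clause, and the 838:59:59 in the sample output is MySQL's documented TIMEDIFF saturation limit (hence the warning), not the real difference. A sqlite3 sketch of the intended per-group span, computed with julianday instead of TIMEDIFF so it does not saturate (data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE track_record (epc TEXT, dt TEXT);
INSERT INTO track_record VALUES
  ('tag1', '2011-03-01 11:43:26'), ('tag1', '2015-10-15 18:23:18'),
  ('tag2', '2015-01-01 00:00:00'), ('tag2', '2015-01-02 00:00:00');
""")

# Group by tag, compute max - min as a day count, keep the widest span.
row = conn.execute("""
SELECT epc, MIN(dt), MAX(dt),
       julianday(MAX(dt)) - julianday(MIN(dt)) AS span_days
FROM track_record
GROUP BY epc
ORDER BY span_days DESC
LIMIT 1
""").fetchone()
print(row[0], int(row[3]))  # tag1 1689
```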
    
    qid & accept id: (30458856, 30458964) query: Is there a non database-specific command for "insert or update" soup:

    soup wrap:

    The answer is no: there is no database-agnostic "insert or update" statement. MySQL and Oracle each have their own syntax.

    In MySQL it would be like

    INSERT INTO tabelname (id, name) 
    VALUES (1, 'abc') 
    ON DUPLICATE KEY UPDATE id = id;
    

    In Oracle it would be like

    DECLARE
        x NUMBER:=0;
    BEGIN
        SELECT nvl((SELECT 1 FROM tabelname WHERE name = 'abc'), 0) INTO x FROM dual;
    
        IF (x = 0) THEN
            INSERT INTO tabelname (id, name) VALUES (1, 'abc');
        END IF;
    
    END;
    

    or you can use merge like this:

    merge into tablename a
        using (select 1 id, 'abc' name from dual) b
            on (a.name = b.name)
        when not matched then
       insert( id, name)
          values( b.id, b.name)
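    For completeness, each engine spells the idiom differently; SQLite, for instance, has INSERT OR IGNORE (and, in newer versions, INSERT ... ON CONFLICT). A runnable Python sketch with an invented table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")

# SQLite's spelling of "insert unless it already exists"; MySQL uses
# ON DUPLICATE KEY UPDATE and Oracle uses MERGE, as shown above.
for _ in range(2):
    conn.execute("INSERT OR IGNORE INTO t (id, name) VALUES (1, 'abc')")

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 1 -- the second insert was a no-op
```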
    
    qid & accept id: (30494810, 30494871) query: SQL subqueries to get field's value as a field name on the result soup:

    soup wrap:

    This sounds like a cross table.

    MySQL does not include a built-in function for cross tables, but you can build your cross table query "by hand".

    Important: You must have a key to group the data. I'll assume that you have a place_id column:

    select max(case detail_key when 'location' then detail_value end) as location
         , max(case detail_key when 'country' then detail_value end) as country
         -- and so on
    from places
    -- add any WHERE conditions here
    group by place_id
    

    Hope this helps.
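    The MAX(CASE ...) pivot is portable across engines; here is a runnable sqlite3 sketch with invented sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE places (place_id INTEGER, detail_key TEXT, detail_value TEXT);
INSERT INTO places VALUES
  (1, 'location', 'Athens'), (1, 'country', 'Greece'),
  (2, 'location', 'Rome'),   (2, 'country', 'Italy');
""")

# One MAX(CASE ...) column per key, grouped by the row identifier,
# turns key/value rows into one wide row per place.
rows = conn.execute("""
SELECT place_id,
       MAX(CASE detail_key WHEN 'location' THEN detail_value END) AS location,
       MAX(CASE detail_key WHEN 'country'  THEN detail_value END) AS country
FROM places
GROUP BY place_id
ORDER BY place_id
""").fetchall()
print(rows)  # [(1, 'Athens', 'Greece'), (2, 'Rome', 'Italy')]
```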


    Edit

    Your comment made me rethink your problem, and I found a solution here. Here is what you need to do:

    1. Create a variable that holds the expressions you want to apply to get what you need
    2. Create a valid SQL query
    3. Use a prepared statement when your query is ready.

    I created a little SQL fiddle for you to see how to solve this, and here it is:

    SQL Fiddle

    MySQL 5.6 Schema Setup:

    create table places(
      id int unsigned not null auto_increment primary key,
      place_id int,
      detail_key varchar(50),
      detail_value varchar(50)
    );
    
    insert into places (place_id, detail_key, detail_value) values
    (1, 'location','Athens'),(1,'country','Greece'),(1,'longitude','12.3333'),(1,'weather','good');
    

    Query 1:

    set @sql = null
    

    Results: (No results)

    Query 2:

    select group_concat(distinct
                        concat(
                          "max(case detail_key when '",
                          detail_key,
                          "' then detail_value end) as `",
                          detail_key,
                          "`"
                        )
           )
    into @sql
    from places
    

    Results: (No results)

    Query 3:

    set @sql = concat("select place_id, ", @sql, " from places group by place_id")
    

    Results: (No results)

    Query 4:

    prepare stmt from @sql
    

    Results: (No results)

    Query 5:

    execute stmt
    

    Results:

    | place_id | location | country | longitude | weather |
    |----------|----------|---------|-----------|---------|
    |        1 |   Athens |  Greece |   12.3333 |    good |
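    The same three steps (collect the expressions, assemble the statement, execute it) can be sketched in Python against sqlite3, building the string in application code instead of with GROUP_CONCAT. Note this interpolates detail_key values from the table into SQL, so it is only safe for trusted data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE places (place_id INTEGER, detail_key TEXT, detail_value TEXT);
INSERT INTO places VALUES
  (1, 'location', 'Athens'), (1, 'country', 'Greece'),
  (1, 'longitude', '12.3333'), (1, 'weather', 'good');
""")

# Steps 1-2: one MAX(CASE ...) expression per distinct key, then the
# full query string (mirrors the GROUP_CONCAT + CONCAT trick above).
keys = [k for (k,) in conn.execute(
    "SELECT DISTINCT detail_key FROM places ORDER BY detail_key")]
exprs = ", ".join(
    "MAX(CASE detail_key WHEN '{0}' THEN detail_value END) AS \"{0}\"".format(k)
    for k in keys)
sql = "SELECT place_id, {} FROM places GROUP BY place_id".format(exprs)

# Step 3: execute the generated statement.
rows = conn.execute(sql).fetchall()
print(rows)  # [(1, 'Greece', 'Athens', '12.3333', 'good')]
```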
    

    Final edit

    If you somehow created the above table with the data corresponding to just one place (i.e. there's no place_id and all details are from a single place), you can do something like this:

    select max(case detail_key when 'location' then detail_value end) as location
         , max(case detail_key when 'country' then detail_value end) as country
         -- and so on
    from places
    -- add any WHERE conditions here
    group by null;
    
    qid & accept id: (30522930, 30523097) query: SQL Query: Unable to get the count of values between a time stamp from multiple tables soup:

    soup wrap:

    Maybe something like this:

    SELECT count(*) FROM
    (
       SELECT id FROM Table1 
       WHERE ttime BETWEEN '29-5-2915 08:17:29' AND '29-5-2915 17:17:29'
       UNION ALL
       SELECT id FROM Table2
       WHERE ttime BETWEEN '29-5-2915 08:17:29' AND '29-5-2915 17:17:29'
    ) AS t
    

    Or if you want a distinct count:

    SELECT count(*) FROM
    (
       SELECT id FROM Table1 
       WHERE ttime BETWEEN '29-5-2915 08:17:29' AND '29-5-2915 17:17:29'
       UNION
       SELECT id FROM Table2
       WHERE ttime BETWEEN '29-5-2915 08:17:29' AND '29-5-2915 17:17:29'
    ) AS t
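    A runnable sqlite3 sketch (invented tables) showing the two variants, where UNION ALL keeps duplicates across the tables and UNION removes them:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (id INTEGER);
CREATE TABLE t2 (id INTEGER);
INSERT INTO t1 VALUES (1), (2);
INSERT INTO t2 VALUES (2), (3);
""")

# UNION ALL: every row from both tables is counted (id 2 twice).
total = conn.execute(
    "SELECT COUNT(*) FROM (SELECT id FROM t1 UNION ALL SELECT id FROM t2)"
).fetchone()[0]

# UNION: duplicates collapse, so id 2 is counted once.
distinct = conn.execute(
    "SELECT COUNT(*) FROM (SELECT id FROM t1 UNION SELECT id FROM t2)"
).fetchone()[0]
print(total, distinct)  # 4 3
```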
    
    qid & accept id: (30577660, 30578027) query: How to calculate Days between two dates soup:

soup wrap:

    This will give you a datediff for a 360 day calendar:

    Exclusive date range:

    if datepart(day, @date2) = datepart(day,dateadd(day, -1,(Cast(Cast(datepart(year, @date2) as varchar) + '-' + Cast(datepart(month, @date2) + 1 as varchar) + '-01' as date))))
    begin 
       Select datediff(month, @date1, @date2) * 30 - (DATEPART(day, @date1) - 30)
    end
    else if datepart(day, @date1) = datepart(day,dateadd(day, -1,(Cast(Cast(datepart(year, @date1) as varchar) + '-' + Cast(datepart(month, @date1) + 1 as varchar) + '-01' as date))))
    begin 
       Select datediff(month, @date1, @date2) * 30 - (30 - datepart(day, @date2))
    end
    else
    begin
       Select datediff(month, @date1, @date2) * 30 - (DATEPART(day, @date1) - datepart(day, @date2))
    end
    

    Inclusive date range:

    if datepart(day, @date2) = datepart(day,dateadd(day, -1,(Cast(Cast(datepart(year, @date2) as varchar) + '-' + Cast(datepart(month, @date2) + 1 as varchar) + '-01' as date))))
    begin 
       Select datediff(month, @date1, @date2) * 30 - (DATEPART(day, @date1) - 30) + 1
    end
    else if datepart(day, @date1) = datepart(day,dateadd(day, -1,(Cast(Cast(datepart(year, @date1) as varchar) + '-' + Cast(datepart(month, @date1) + 1 as varchar) + '-01' as date))))
    begin 
       Select datediff(month, @date1, @date2) * 30 - (30 - datepart(day, @date2)) + 1
    end
    else
    begin
       Select datediff(month, @date1, @date2) * 30 - (DATEPART(day, @date1) - datepart(day, @date2)) + 1
    end
    

    The IF statements check whether either date is the last day of its month. If so, it is treated as the 30th, since each month must count as 30 days.

    It calculates the number of months between the dates, multiplies by 30, and then applies the difference in days.

    Run from one year to the next this gives 360; with your example dates it gives 311, which is correct for a 360-day calendar.

    SQL Fiddle: http://sqlfiddle.com/#!6/9eecb/5791/0
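    The same 30/360 logic ports directly to a procedural language. A hedged Python sketch of the exclusive variant, following the T-SQL branch-for-branch (the function name is my own):

```python
import calendar
from datetime import date

def days360_exclusive(date1, date2):
    """Exclusive 30/360 day count, mirroring the T-SQL above."""
    # Month boundaries crossed, like DATEDIFF(month, @date1, @date2).
    months = (date2.year - date1.year) * 12 + (date2.month - date1.month)
    last1 = calendar.monthrange(date1.year, date1.month)[1]
    last2 = calendar.monthrange(date2.year, date2.month)[1]
    if date2.day == last2:          # @date2 is the last day of its month
        return months * 30 - (date1.day - 30)
    elif date1.day == last1:        # @date1 is the last day of its month
        return months * 30 - (30 - date2.day)
    return months * 30 - (date1.day - date2.day)

print(days360_exclusive(date(2014, 1, 1), date(2015, 1, 1)))  # 360
```
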

    qid & accept id: (30578605, 30578778) query: Get result on two table join soup:

soup wrap:

    You are overcomplicating what should be a simple SELECT with NOT EXISTS:

    SELECT SalesManID, ProductID
    FROM Salesman_Product p
    WHERE NOT EXISTS (
       SELECT 1
       FROM  Salesman_Sales s 
       WHERE p.SalesManID = s.SalesManID and p.ProductID = s.ProductID
    )
    

    Results:

    SalesManID    ProductID
    1             2
    1             4
    2             4
    

    see fiddle here
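    The anti-join can be reproduced end to end with sqlite3; the sample rows below are invented to yield the result table shown above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Salesman_Product (SalesManID INTEGER, ProductID INTEGER);
    CREATE TABLE Salesman_Sales   (SalesManID INTEGER, ProductID INTEGER);
    -- Assignments: salesman 1 covers products 1, 2, 4; salesman 2 covers 3, 4.
    INSERT INTO Salesman_Product VALUES (1,1),(1,2),(1,4),(2,3),(2,4);
    -- Actual sales: salesman 1 sold product 1; salesman 2 sold product 3.
    INSERT INTO Salesman_Sales VALUES (1,1),(2,3);
""")

# Products each salesman is assigned but has never sold.
rows = con.execute("""
    SELECT SalesManID, ProductID
    FROM Salesman_Product p
    WHERE NOT EXISTS (
        SELECT 1 FROM Salesman_Sales s
        WHERE p.SalesManID = s.SalesManID AND p.ProductID = s.ProductID
    )
    ORDER BY SalesManID, ProductID
""").fetchall()
print(rows)  # [(1, 2), (1, 4), (2, 4)]
```
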

    qid & accept id: (30607214, 30607683) query: SQL Insert into and Select multiple columns? soup:

soup wrap:

    First thing: I do not have experience with tables this big, so you will have to test the following tips yourself to find out whether they really work in your situation:

    1. Create index in the source table

    Make sure that both columns FromIDNumber and ToIDNumber have an index, e.g.

    ALTER TABLE Communication ADD INDEX (FromIDNumber);
    ALTER TABLE Communication ADD INDEX (ToIDNumber);
    

    2. Try to remove DISTINCT

    I could not find a faster query for your example, though you might try the query without the DISTINCT keyword - using UNION returns only distinct values by definition. So this SQL gives us the same result as your current query:

    INSERT INTO CommIDTemp (`ID`)
    SELECT FromIDNumber FROM Communication
    UNION 
    SELECT ToIDNumber FROM Communication;
    

    3. Use a primary key in the temp table

    Also try another approach by setting the CommIDTemp.ID column as a primary key and use INSERT IGNORE - this is especially useful if you want to update the table frequently without deleting the contents:

    CREATE TABLE CommIDTemp (ID INT PRIMARY KEY);
    
    INSERT IGNORE INTO CommIDTemp (`ID`)
    SELECT FromIDNumber FROM Communication
    UNION
    SELECT ToIDNumber FROM Communication;
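    SQLite's `INSERT OR IGNORE` against a primary key behaves like MySQL's `INSERT IGNORE`, so the dedupe-on-insert idea can be sketched with sqlite3 (sample rows invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Communication (FromIDNumber INTEGER, ToIDNumber INTEGER);
    INSERT INTO Communication VALUES (1, 2), (2, 3), (1, 3);
    CREATE TABLE CommIDTemp (ID INTEGER PRIMARY KEY);
""")

# The primary key silently drops duplicates under OR IGNORE,
# so even UNION ALL input ends up de-duplicated.
con.execute("""
    INSERT OR IGNORE INTO CommIDTemp (ID)
    SELECT FromIDNumber FROM Communication
    UNION ALL
    SELECT ToIDNumber FROM Communication
""")
ids = [r[0] for r in con.execute("SELECT ID FROM CommIDTemp ORDER BY ID")]
print(ids)  # [1, 2, 3]
```
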
    
    qid & accept id: (30616479, 30617319) query: Updating table with the closest value from a lookup matrix soup:

soup wrap:

    From what I can see, you should be able to perform a simple UPDATE using a JOIN, where you ROUND the values of vx and vz for the JOIN condition. Performance-wise, you'd have to test this on your dataset, though.

    Here's the basic method to JOIN the data; note I've padded out the INSERT scripts to have a complete matrix:

    CREATE TABLE #dm_matrix
        (
          x FLOAT ,
          z FLOAT ,
          avgValue DECIMAL(2, 1)
        )
    
    
    INSERT  INTO #dm_matrix
    VALUES  ( 1, 1, RAND() )
    INSERT  INTO #dm_matrix
    VALUES  ( 1, 2, RAND() )
    INSERT  INTO #dm_matrix
    VALUES  ( 1, 3, RAND() )
    INSERT  INTO #dm_matrix
    VALUES  ( 1, 4, RAND() )
    INSERT  INTO #dm_matrix
    VALUES  ( 2, 1, RAND() )
    INSERT  INTO #dm_matrix
    VALUES  ( 2, 2, RAND() )
    INSERT  INTO #dm_matrix
    VALUES  ( 2, 3, RAND() )
    INSERT  INTO #dm_matrix
    VALUES  ( 2, 4, RAND() )
    INSERT  INTO #dm_matrix
    VALUES  ( 3, 1, RAND() )
    INSERT  INTO #dm_matrix
    VALUES  ( 3, 2, RAND() )
    INSERT  INTO #dm_matrix
    VALUES  ( 3, 3, RAND() )
    INSERT  INTO #dm_matrix
    VALUES  ( 3, 4, RAND() )
    INSERT  INTO #dm_matrix
    VALUES  ( 4, 1, RAND() )
    INSERT  INTO #dm_matrix
    VALUES  ( 4, 2, RAND() )
    INSERT  INTO #dm_matrix
    VALUES  ( 4, 3, RAND() )
    INSERT  INTO #dm_matrix
    VALUES  ( 4, 4, RAND() )
    
    SELECT  *
    FROM    #dm_matrix
    
    CREATE TABLE #dm_values
        (
          vx DECIMAL(2, 1) ,
          vz DECIMAL(2, 1) ,
          v FLOAT
        )
    
    INSERT  INTO #dm_values
            ( vx, vz )
    VALUES  ( 1 + RAND() * 3, 1 + RAND() * 3 )
    INSERT  INTO #dm_values
            ( vx, vz )
    VALUES  ( 1 + RAND() * 3, 1 + RAND() * 3 )
    
    SELECT  *
    FROM    #dm_values
    
    -- replace this SELECT with the UPDATE commands below to update values
    SELECT  v.vx ,
            v.vz ,
            m.avgValue
    FROM    #dm_values v
            INNER JOIN #dm_matrix m ON ROUND(v.vx, 0) = m.x
                                       AND ROUND(v.vz, 0) = m.z
    
    DROP TABLE #dm_matrix
    DROP TABLE #dm_values
    

    And for the UPDATE you would do something like this:

    UPDATE v
    SET v.v = m.avgValue
    FROM #dm_values v 
    INNER JOIN #dm_matrix m ON ROUND(v.vx, 0) = m.x AND ROUND(v.vz, 0) = m.z
    
    SELECT * FROM #dm_values
    

    Produces:

    Matrix:

    x   z   avgValue
    1   1   0.6
    1   2   0.9  -- row 2 below
    1   3   0.4
    1   4   0.5
    2   1   0.7
    2   2   0.4
    2   3   0.5  -- row 1 below
    2   4   0.5
    3   1   0.4
    3   2   0.1
    3   3   0.3
    3   4   0.8
    4   1   0.1
    4   2   1.0
    4   3   0.5
    4   4   0.5  
    

    Values:

    vx  vz  v
    1.8 2.8 NULL  -- x = 2, z = 3
    1.3 1.5 NULL  -- x = 1, z = 2
    

    After Update:

    vx  vz  v
    1.8 2.8 0.5
    1.3 1.5 0.9
    

    NOTE:

    I've changed the data type to DECIMAL(2, 1) for the purpose of this post, so you may need to modify this based on your actual dataset.

    qid & accept id: (30643949, 30644145) query: How to get a id from a url using SQL QUERY. The ID changes dynamically. the database is Sql server 2008 soup:

soup wrap:

    Try this:

    SELECT SUBSTRING(
           @URL2, 
           CHARINDEX('/id/', @URL2)+4, 
           CHARINDEX('/', @URL2, CHARINDEX('/id/', @URL2)+5)
            - (CHARINDEX('/id/', @URL2)+4))
    

    Note: This assumes that the id is always followed by at least one more slash.

    Breakdown:
    The first argument of the substring is the string that contains the full expression.
    The second one is the first index after /id/. The third one is the desired length, calculated as the index of the first / after /id/ minus the first index after /id/.

    update

    To cope with strings that do not contain a slash after the id value, use CASE:

    SELECT SUBSTRING(
           @URL, 
           CHARINDEX('/id/', @URL)+4, 
           CASE WHEN CHARINDEX('/', @URL, CHARINDEX('/id/', @URL)+5) > 0 THEN
           CHARINDEX('/', @URL, CHARINDEX('/id/', @URL)+5)
            - (CHARINDEX('/id/', @URL)+4)
           ELSE
              LEN(@URL)
           END
           )
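    The CHARINDEX/SUBSTRING arithmetic above maps onto ordinary string indexing. A hedged Python sketch of the same extraction, covering both the with-slash and without-slash cases (URL and function name are invented):

```python
def extract_id(url):
    """Return the segment after '/id/', up to the next slash if present."""
    start = url.index('/id/') + 4          # first index after '/id/'
    end = url.find('/', start + 1)         # first '/' after the id value
    return url[start:] if end == -1 else url[start:end]

print(extract_id('http://example.com/page/id/1234/details'))  # 1234
print(extract_id('http://example.com/page/id/1234'))          # 1234
```
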
    
    qid & accept id: (30652021, 30652148) query: SQL How to select one of two nearly identical rows soup:

soup wrap:

    If all of the newer rows have the quantity in them, and the older rows don't, you can try something like this:

    Select * from MyTable
    where [Product] like '%X[0-9]%'
    

    Alternatively, if all the newer rows have something additional added to the name, try this:

    Select * from (
      Select *
      , ROW_NUMBER() over (partition by ItmKey order by len(product) desc) RN
      from MyTable
      ) a
    where a.RN = 1
    

    The first option selects every row where the name of the product includes 'X' followed by a number. The second will return, for each item ID, the row with the longest product name.
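    The second option can be demonstrated with sqlite3 (window functions need SQLite 3.25+, bundled with current Python builds); the product names below are invented to mimic older rows without a quantity suffix and newer rows with one:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE MyTable (ItmKey INTEGER, Product TEXT);
    INSERT INTO MyTable VALUES
        (1, 'Widget'), (1, 'Widget X12'),
        (2, 'Gadget'), (2, 'Gadget X3');
""")

# Keep, per ItmKey, the row with the longest product name.
rows = con.execute("""
    SELECT ItmKey, Product FROM (
        SELECT ItmKey, Product,
               ROW_NUMBER() OVER (PARTITION BY ItmKey
                                  ORDER BY length(Product) DESC) AS RN
        FROM MyTable
    ) a
    WHERE a.RN = 1
    ORDER BY ItmKey
""").fetchall()
print(rows)  # [(1, 'Widget X12'), (2, 'Gadget X3')]
```
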

    qid & accept id: (30805539, 30806184) query: Select a specific date for the current year soup:

soup wrap:

    HVD's method is probably the simplest:

    SELECT DATEADD(YEAR,YEAR(GETDATE()) - 2000,'20000531')
    

    In SQL Server 2012 and above, they made it really easy:

    SELECT DATEFROMPARTS(YEAR(GETDATE()),05,31)
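    The DATEFROMPARTS idea ("a fixed month/day in the current year") is a one-liner in most languages; a small Python sketch (the helper name is my own):

```python
from datetime import date

def date_from_parts(month, day):
    """Build the given month/day in the current year, like DATEFROMPARTS."""
    return date(date.today().year, month, day)

d = date_from_parts(5, 31)  # May 31st of whatever year it is now
```
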
    
    qid & accept id: (30843033, 30843196) query: Get unmatched records without using oracle minus except not in soup:

soup wrap:

    The one option left to you is NOT EXISTS:

    SELECT t1.name 
      FROM table1 t1 
     WHERE NOT EXISTS (SELECT 'X' 
                         FROM table2 t2 
                        WHERE t2.name = t1.name);
    

    Update: Using Join

    with table_ as 
    (
      select t1.name t1_name, t2.name t2_name
        from table1 t1
        left join table2 t2 
          on t1.name = t2.name)
    select t1_name 
      from table_
     where t2_name is null;
    

    Or just

    select t1.name
      from table1 t1
      left join table2 t2 
        on t1.name = t2.name
     where t2.name is null;
    
    qid & accept id: (30857779, 30857960) query: How do I store a SQL statement into a variable soup:
soup wrap:
    Dim sqlConnection1 As New SqlConnection("Your Connection String")
    Dim cmd As New SqlCommand
    Dim reader As SqlDataReader
    
    ' NOTE: 'SomeColumn' is a placeholder - substitute the real column to filter on.
    ' Binding a parameter avoids SQL injection from TextBox1.Text.
    cmd.CommandText = "SELECT DESC1 FROM Master WHERE SomeColumn = @search"
    cmd.Parameters.AddWithValue("@search", TextBox1.Text)
    cmd.CommandType = CommandType.Text
    cmd.Connection = sqlConnection1
    
    sqlConnection1.Open()    
    reader = cmd.ExecuteReader()
    ' Data is accessible through the DataReader object here. 
    
    If reader.HasRows Then 
            Do While reader.Read()
                Console.WriteLine(reader.GetInt32(0) _
                  & vbTab & reader.GetString(1))
            Loop 
        Else
            Console.WriteLine("No rows found.")
        End If   
    sqlConnection1.Close()
    

    This assumes the first column returned is a NUMBER type and the second a VARCHAR.

    Read more

    ==Update==

    connectionString = "Data Source=ServerName;Initial Catalog=DatabaseName;User ID=UserName;Password=Password"
    

    See more
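    The parameter-binding idea in the snippet above is language-agnostic; here is a hedged Python/sqlite3 sketch of the same pattern (table, column, and value are invented stand-ins for Master/DESC1/TextBox1.Text):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Master (DESC1 TEXT);
    INSERT INTO Master VALUES ('alpha'), ('beta');
""")

user_input = "alpha"  # stands in for TextBox1.Text
# The driver binds the value; it is never spliced into the SQL string.
rows = con.execute(
    "SELECT DESC1 FROM Master WHERE DESC1 = ?", (user_input,)
).fetchall()
print(rows)  # [('alpha',)]
```
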

    qid & accept id: (30877040, 30877188) query: Update table to record position based on text column soup:

soup wrap:

    If you want to update the table so that the position column corresponds to the position of the 'name' column (in this example, the alphabet), you can use a case statement:

    UPDATE myTable SET position =
       CASE
          WHEN name = 'a' THEN 1
          WHEN name = 'b' THEN 2
          WHEN name = 'c' THEN 3
          ...
          ELSE 26
       END;
    

    Here is an SQL Fiddle example.


    EDIT

    To order based on the strings you have, you can first write a query using a variable to get the position of each string like this:

    SET @position := 0;
    
    SELECT @position := @position + 1, name
    FROM(
      SELECT DISTINCT name
      FROM myTable
      ORDER BY name) t;
    

    Once you have that temporary table, you can join it to your original table and update position of the original to match position of the temp table, like this:

    SET @position := 0;
    
    UPDATE myTable m
    JOIN(
      SELECT @position := @position + 1 AS position, name
      FROM(
        SELECT DISTINCT name
        FROM myTable
        ORDER BY name) t) tmp ON tmp.name = m.name
    SET m.position = tmp.position;
    

    Here is an SQL Fiddle example of that.
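    What the `@position` counter computes is just "rank of each distinct name in sorted order"; a small Python sketch of that mapping with invented sample names:

```python
# Map each distinct name to its alphabetical position, mirroring the
# @position := @position + 1 counter over SELECT DISTINCT name ... ORDER BY name.
names = ['b', 'a', 'c', 'a']
position = {name: i + 1
            for i, name in enumerate(sorted(set(names)))}
print(position)  # {'a': 1, 'b': 2, 'c': 3}
```
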

    qid & accept id: (30906021, 30906035) query: Select column names from one table based off of values in column of another table soup:

soup wrap:

    Here is one type of example:

    select * from table1 join table2 on table1.col1= table2.col2
    

    We follow the syntax above:

    select * from table1 join table2 on table1.col1= table2.foo
    

    This is how to find out whether a column name is present in another table:

    SELECT * FROM(    SELECT letter  FROM `Table_2` ) a JOIN
    (SELECT `COLUMN_NAME` 
    FROM `INFORMATION_SCHEMA`.`COLUMNS` 
    WHERE `TABLE_SCHEMA`='database_name' 
        AND `TABLE_NAME`='Table_1') b ON a.letter= b.COLUMN_NAME
    

    Thank you.

    qid & accept id: (30932071, 30933685) query: Query to filter records based on specific conditions soup:

soup wrap:

    Use analytical functions:

    select distinct
      id,
      first_value (status) over (partition by id order by status desc) status,
      first_value (amt   ) over (partition by id order by status desc) amt
    from
      tq84_a_status_check
    where
      status in ('LC', 'BE')
    order by
      id;
    

    Testdata:

    create table tq84_a_status_check (
      id number,
      status varchar2(10),
      amt number
    );
    
    select distinct
      id,
      first_value (status) over (partition by id order by status desc) status,
      first_value (amt   ) over (partition by id order by status desc) amt
    from
      tq84_a_status_check
    where
      status in ('LC', 'BE')
    order by
      id;
    
    qid & accept id: (30934830, 30955870) query: row counter with condition in two different columns soup:

soup wrap:

    I would do the running total using sum() as a windowed aggregate function with the over ... clause, which works in SQL Server 2012+.

    select 
        g.RowId, g.GameDate, t.GoalMinute, p.PlayerName, 
        GoalsHome = COALESCE(SUM(case when TeamRowId = g.TeamHomeRowId then 1 end) OVER (PARTITION BY gamerowid ORDER BY goalminute),0),
        GoalsGuest = COALESCE(SUM(case when TeamRowId = g.TeamGuestRowId then 1 end) OVER (PARTITION BY gamerowid ORDER BY goalminute),0) 
    from tblGoals t
    join tblPlayers p on t.PlayerRowId = p.RowId
    join tblGames g on t.GameRowId = g.RowId
    order by t.GameRowId, t.GoalMinute
    

    Another approach (that also works in older versions) is to use a self-join and sum up the rows with lower goalminutes. For ease of reading I've used a common table expression to split the goals into two columns for home and guest team:

    ;with t as (
        select 
           g.GoalMinute, g.PlayerRowId, g.GameRowId, 
           case when TeamRowId = ga.TeamHomeRowId then 1 end HomeGoals,
           case when TeamRowId = ga.TeamGuestRowId then 1 end GuestGoals
        from tblGoals g
        join tblGames ga on g.GameRowId = ga.RowId
    )
    
    select 
        g.RowId, g.GameDate, t.GoalMinute, p.PlayerName, 
        GoalsHome  = (select sum(coalesce(HomeGoals,0)) from t t2 where t2.GoalMinute <= t.GoalMinute and t2.GameRowId = t.GameRowId),
        GoalsGuest = (select sum(coalesce(GuestGoals,0)) from t t2 where t2.GoalMinute <= t.GoalMinute and t2.GameRowId = t.GameRowId)
    from t
    join tblPlayers p on t.PlayerRowId = p.RowId
    join tblGames g on t.GameRowId = g.RowId
    order by t.GameRowId, t.GoalMinute
    

    The CTE isn't necessary, though; you could just as well use a derived table.

    Sample SQL Fiddle
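    The windowed running total also works in SQLite 3.25+; a compact sketch with an invented goals table (1 = home goal, 0 = guest goal) showing the cumulative score per game:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE goals (game INTEGER, minute INTEGER, home INTEGER)")
con.executemany("INSERT INTO goals VALUES (?,?,?)",
                [(1, 10, 1), (1, 25, 0), (1, 70, 1)])

# Running score per game: cumulative home/guest goals ordered by minute.
rows = con.execute("""
    SELECT minute,
           SUM(CASE WHEN home = 1 THEN 1 ELSE 0 END)
               OVER (PARTITION BY game ORDER BY minute) AS home_goals,
           SUM(CASE WHEN home = 0 THEN 1 ELSE 0 END)
               OVER (PARTITION BY game ORDER BY minute) AS guest_goals
    FROM goals
    ORDER BY minute
""").fetchall()
print(rows)  # [(10, 1, 0), (25, 1, 1), (70, 2, 1)]
```
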

    qid & accept id: (30950398, 30950453) query: I have a different date format like 20-Mar-2015 to 30-Mar-2015 in a column in sql. i need to split it in two different columns in sql query soup:

soup wrap:

    I think this SQL will be useful to you.

    For example:

    SELECT SUBSTRING(date_column,1,11) AS date_1, 
           SUBSTRING(date_column,16,27) AS date_2; 
    

    Here date_column='20-Mar-2015 TO 30-Mar-2015'

    SELECT SUBSTRING('20-Mar-2015 TO 30-Mar-2015',1,11) AS date_1, 
     SUBSTRING('20-Mar-2015 TO 30-Mar-2015',16,27) AS date_2; 
    

    Thank you.
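    The fixed-width slicing above translates directly to string slices; a small Python sketch, with a split-based alternative that is more robust if the widths ever vary:

```python
# '20-Mar-2015 TO 30-Mar-2015' sliced like SUBSTRING(col,1,11) / SUBSTRING(col,16,...).
s = '20-Mar-2015 TO 30-Mar-2015'
date_1 = s[0:11]   # characters 1-11
date_2 = s[15:]    # characters 16 onward

# Separator-based split: does not depend on fixed column widths.
alt_1, alt_2 = s.split(' TO ')
```
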

    qid & accept id: (30977821, 30978597) query: Title search in SQL With replacement of noice words soup:

    I think you want something like this:

    DECLARE @nw TABLE ( sn INT, [key] VARCHAR(100) )

    INSERT  INTO @nw
    VALUES  ( 1, 'and' ),
            ( 2, 'on' ),
            ( 3, 'of' ),
            ( 4, 'the' ),
            ( 5, 'view' )


    DECLARE @s VARCHAR(100) = 'view This of is the Man';
    WITH    cte
              AS ( SELECT   sn ,
                            REPLACE(@s, [key], '') AS s
                   FROM     @nw
                   WHERE    sn = 1
                   UNION ALL
                   SELECT   n.sn ,
                            REPLACE(s, n.[key], '') AS s
                   FROM     @nw n
                            JOIN cte c ON c.sn + 1 = n.sn
                 )
        SELECT TOP 1 @s =
                REPLACE(REPLACE(REPLACE(s, ' ', '[]'), '][', ''), '[]', ' ')
        FROM    cte
        ORDER BY sn DESC

    Output:

    This is Man

    First you recursively remove noise words from the search string, and at the end a little trick removes duplicate consecutive spaces.

    \n

    Then you can filter base table like:

    \n
    SELECT * FROM TableName WHERE Title LIKE '%' + @s + '%' \n
    \n

    May be you want to consider FULL TEXT SEARCH? I suspect you also want to remove those noise words from base table while searching. It will be very slow. Full Text Search is optimized for such type of work. It includes noise words, stoplists and more...

    \n

    If you don't want to use Full Text Search, you can add additional column to your base table, which will hold the value from Title but without noise words and search based on that column.

    \n

    But if you insist here is the full code for this:

    \n
    DECLARE @t TABLE\n    (\n      SNo INT ,\n      Title VARCHAR(100)\n    )\nINSERT  INTO @t\n        ( SNo, Title )\nVALUES  ( 1, 'women holding stack  of gifts' ),\n        ( 2, 'Rear view of a man playing golf' ),\n        ( 3, 'Women holding gifts' ),\n        ( 4, 'Women holding gifts' ),\n        ( 5, 'Businessman reading a newspaper and smiling' ),\n        ( 6, 'Hey This some what of is the Man from Chicago' )\n\nDECLARE @nw TABLE\n    (\n      sn INT ,\n      [key] VARCHAR(100)\n    )\n\nINSERT  INTO @nw\nVALUES  ( 1, 'and' ),\n        ( 2, 'on' ),\n        ( 3, 'of' ),\n        ( 4, 'the' ),\n        ( 5, 'view' ),\n        ( 6, 'some' ),\n        ( 7, 'what' )\n
    \n

    And the code:

    \n
    DECLARE @s VARCHAR(100) = 'view This of is the Man';\nWITH    cte\n          AS ( SELECT   sn ,\n                        REPLACE(@s, [key], '') AS s\n               FROM     @nw\n               WHERE    sn = 1\n               UNION ALL\n               SELECT   n.sn ,\n                        REPLACE(s, n.[key], '') AS s\n               FROM     @nw n\n                        JOIN cte c ON c.sn + 1 = n.sn\n             )\n    SELECT TOP 1\n            @s = REPLACE(REPLACE(REPLACE(s, ' ', '[]'), '][', ''), '[]', ' ')\n    FROM    cte\n    ORDER BY sn DESC\n\n;WITH    cte\n          AS ( SELECT   t.* ,\n                        n.sn ,\n                        REPLACE(t.Title, n.[key], '') AS s\n               FROM     @t t\n                        JOIN @nw n ON sn = 1\n               UNION ALL\n               SELECT   c.SNo ,\n                        c.Title ,\n                        n.sn ,\n                        REPLACE(c.s, n.[key], '')\n               FROM     cte c\n                        JOIN @nw n ON n.sn = c.sn + 1\n             )\n    SELECT  *\n    FROM    cte\n    WHERE   REPLACE(REPLACE(REPLACE(s, ' ', '[]'), '][', ''), '[]', ' ') LIKE '%' + @s + '%'\n
    \n

    And the output:

    \n
    SNo Title                                           sn  s\n6   Hey This some what of is the Man from Chicago   7   Hey This    is  Man from Chicago\n
    \n soup wrap:

    I think you want something like this:

    DECLARE @nw TABLE ( sn INT, [key] VARCHAR(100) )
    
    INSERT  INTO @nw
    VALUES  ( 1, 'and' ),
            ( 2, 'on' ),
            ( 3, 'of' ),
            ( 4, 'the' ),
            ( 5, 'view' )
    
    
    DECLARE @s VARCHAR(100) = 'view This of is the Man';
    WITH    cte
              AS ( SELECT   sn ,
                            REPLACE(@s, [key], '') AS s
                   FROM     @nw
                   WHERE    sn = 1
                   UNION ALL
                   SELECT   n.sn ,
                            REPLACE(s, n.[key], '') AS s
                   FROM     @nw n
                            JOIN cte c ON c.sn + 1 = n.sn
                 )
        SELECT TOP 1 @s =
                REPLACE(REPLACE(REPLACE(s, ' ', '[]'), '][', ''), '[]', ' ')
        FROM    cte
        ORDER BY sn DESC
    

    Output:

    This is Man
    

    First the CTE recursively removes the noise words from the search string, and at the end a little trick removes runs of consecutive spaces.
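    The space-collapsing trick is easier to see outside SQL. Here is a minimal Python sketch of the same three nested REPLACE calls (the function name is mine):

    ```python
    def collapse_spaces(s):
        # ' ' -> '[]', then drop the '][' seams inside each run, then '[]' -> ' '
        return s.replace(' ', '[]').replace('][', '').replace('[]', ' ')

    print(collapse_spaces('Hey This    is  Man'))  # Hey This is Man
    ```

    Each run of N spaces becomes N `[]` pairs; removing every `][` seam collapses the run to a single `[]`, which is then turned back into one space.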

    Then you can filter the base table like:

    SELECT * FROM TableName WHERE Title LIKE '%' + @s + '%' 
    

    You may want to consider FULL TEXT SEARCH. I suspect you also want to remove those noise words from the base table while searching, which will be very slow. Full Text Search is optimized for exactly this type of work: it supports noise words, stoplists and more.

    If you don't want to use Full Text Search, you can add an additional column to your base table that holds the value from Title without the noise words, and search on that column.

    But if you insist, here is the full code for this:

    DECLARE @t TABLE
        (
          SNo INT ,
          Title VARCHAR(100)
        )
    INSERT  INTO @t
            ( SNo, Title )
    VALUES  ( 1, 'women holding stack  of gifts' ),
            ( 2, 'Rear view of a man playing golf' ),
            ( 3, 'Women holding gifts' ),
            ( 4, 'Women holding gifts' ),
            ( 5, 'Businessman reading a newspaper and smiling' ),
            ( 6, 'Hey This some what of is the Man from Chicago' )
    
    DECLARE @nw TABLE
        (
          sn INT ,
          [key] VARCHAR(100)
        )
    
    INSERT  INTO @nw
    VALUES  ( 1, 'and' ),
            ( 2, 'on' ),
            ( 3, 'of' ),
            ( 4, 'the' ),
            ( 5, 'view' ),
            ( 6, 'some' ),
            ( 7, 'what' )
    

    And the code:

    DECLARE @s VARCHAR(100) = 'view This of is the Man';
    WITH    cte
              AS ( SELECT   sn ,
                            REPLACE(@s, [key], '') AS s
                   FROM     @nw
                   WHERE    sn = 1
                   UNION ALL
                   SELECT   n.sn ,
                            REPLACE(s, n.[key], '') AS s
                   FROM     @nw n
                            JOIN cte c ON c.sn + 1 = n.sn
                 )
        SELECT TOP 1
                @s = REPLACE(REPLACE(REPLACE(s, ' ', '[]'), '][', ''), '[]', ' ')
        FROM    cte
        ORDER BY sn DESC
    
    ;WITH    cte
              AS ( SELECT   t.* ,
                            n.sn ,
                            REPLACE(t.Title, n.[key], '') AS s
                   FROM     @t t
                            JOIN @nw n ON sn = 1
                   UNION ALL
                   SELECT   c.SNo ,
                            c.Title ,
                            n.sn ,
                            REPLACE(c.s, n.[key], '')
                   FROM     cte c
                            JOIN @nw n ON n.sn = c.sn + 1
                 )
        SELECT  *
        FROM    cte
        WHERE   REPLACE(REPLACE(REPLACE(s, ' ', '[]'), '][', ''), '[]', ' ') LIKE '%' + @s + '%'
    

    And the output:

    SNo Title                                           sn  s
    6   Hey This some what of is the Man from Chicago   7   Hey This    is  Man from Chicago
    
    qid & accept id: (31010476, 31013523) query: Find group of N similar numbers in group of N+M numbers soup:

    SQL Fiddle

    \n

    PostgreSQL 9.3 Schema Setup:

    \n

    A small dataset of random data:

    \n
    CREATE TABLE test (\n  id INT,\n  population INT\n);\nINSERT INTO TEST VALUES (  1, 12 );\nINSERT INTO TEST VALUES (  2, 11 );\nINSERT INTO TEST VALUES (  3, 14 );\nINSERT INTO TEST VALUES (  4,  6 );\nINSERT INTO TEST VALUES (  5,  7 );\nINSERT INTO TEST VALUES (  6,  7 );\nINSERT INTO TEST VALUES (  7,  1 );\nINSERT INTO TEST VALUES (  8, 15 );\nINSERT INTO TEST VALUES (  9, 14 );\nINSERT INTO TEST VALUES ( 10, 14 );\nINSERT INTO TEST VALUES ( 11, 15 );\nINSERT INTO TEST VALUES ( 12, 12 );\nINSERT INTO TEST VALUES ( 13, 11 );\nINSERT INTO TEST VALUES ( 14,  3 );\nINSERT INTO TEST VALUES ( 15,  8 );\nINSERT INTO TEST VALUES ( 16,  1 );\nINSERT INTO TEST VALUES ( 17,  1 );\nINSERT INTO TEST VALUES ( 18,  2 );\nINSERT INTO TEST VALUES ( 19,  3 );\nINSERT INTO TEST VALUES ( 20,  5 );\n
    \n

    Query 1:

    \n
    WITH ordered_sums AS (\n  SELECT ID,\n         POPULATION,\n         ROW_NUMBER() OVER ( ORDER BY POPULATION ) AS RN,\n         POPULATION - LAG(POPULATION,4) OVER ( ORDER BY POPULATION ) AS DIFFERENCE\n  FROM   test\n), minimum_rn AS (\n  SELECT DISTINCT FIRST_VALUE( RN ) OVER wnd AS optimal_rn\n  FROM   ordered_sums\n  WINDOW wnd AS ( ORDER BY DIFFERENCE )\n)\nSELECT ID,\n       POPULATION\nFROM   ordered_sums o\n       INNER JOIN\n       minimum_rn m\n       ON ( o.RN BETWEEN m.OPTIMAL_RN - 4 AND m.OPTIMAL_RN )\n
    \n

    Results:

    \n
    | id | population |\n|----|------------|\n| 10 |         14 |\n|  9 |         14 |\n|  3 |         14 |\n| 11 |         15 |\n|  8 |         15 |\n
    \n

    The query above will select 5 rows - to change it to select N rows then change the 4s in the LAG function and in the last line to N-1.

    \n soup wrap:

    PostgreSQL 9.3 Schema Setup:

    A small dataset of random data:

    CREATE TABLE test (
      id INT,
      population INT
    );
    INSERT INTO TEST VALUES (  1, 12 );
    INSERT INTO TEST VALUES (  2, 11 );
    INSERT INTO TEST VALUES (  3, 14 );
    INSERT INTO TEST VALUES (  4,  6 );
    INSERT INTO TEST VALUES (  5,  7 );
    INSERT INTO TEST VALUES (  6,  7 );
    INSERT INTO TEST VALUES (  7,  1 );
    INSERT INTO TEST VALUES (  8, 15 );
    INSERT INTO TEST VALUES (  9, 14 );
    INSERT INTO TEST VALUES ( 10, 14 );
    INSERT INTO TEST VALUES ( 11, 15 );
    INSERT INTO TEST VALUES ( 12, 12 );
    INSERT INTO TEST VALUES ( 13, 11 );
    INSERT INTO TEST VALUES ( 14,  3 );
    INSERT INTO TEST VALUES ( 15,  8 );
    INSERT INTO TEST VALUES ( 16,  1 );
    INSERT INTO TEST VALUES ( 17,  1 );
    INSERT INTO TEST VALUES ( 18,  2 );
    INSERT INTO TEST VALUES ( 19,  3 );
    INSERT INTO TEST VALUES ( 20,  5 );
    

    Query 1:

    WITH ordered_sums AS (
      SELECT ID,
             POPULATION,
             ROW_NUMBER() OVER ( ORDER BY POPULATION ) AS RN,
             POPULATION - LAG(POPULATION,4) OVER ( ORDER BY POPULATION ) AS DIFFERENCE
      FROM   test
    ), minimum_rn AS (
      SELECT DISTINCT FIRST_VALUE( RN ) OVER wnd AS optimal_rn
      FROM   ordered_sums
      WINDOW wnd AS ( ORDER BY DIFFERENCE )
    )
    SELECT ID,
           POPULATION
    FROM   ordered_sums o
           INNER JOIN
           minimum_rn m
           ON ( o.RN BETWEEN m.OPTIMAL_RN - 4 AND m.OPTIMAL_RN )
    

    Results:

    | id | population |
    |----|------------|
    | 10 |         14 |
    |  9 |         14 |
    |  3 |         14 |
    | 11 |         15 |
    |  8 |         15 |
    

    The query above selects 5 rows; to make it select N rows instead, change the 4s in the LAG function and in the last line to N-1.
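    The underlying idea (sort, then slide a window of N values and keep the one with the smallest max-min spread) can be sketched in Python. The helper name is mine and the data mirrors the sample populations above:

    ```python
    def tightest_group(values, n):
        """Return the n values with the smallest max-min spread (first such window)."""
        vs = sorted(values)
        start = min(range(len(vs) - n + 1), key=lambda i: vs[i + n - 1] - vs[i])
        return vs[start:start + n]

    populations = [12, 11, 14, 6, 7, 7, 1, 15, 14, 14, 15, 12, 11, 3, 8, 1, 1, 2, 3, 5]
    print(tightest_group(populations, 5))  # [14, 14, 14, 15, 15]
    ```

    This matches the query's result set {14, 14, 14, 15, 15}; the LAG(POPULATION, 4) difference in the SQL is exactly the window spread computed here.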

    qid & accept id: (31072333, 31074214) query: unsigned right shift '>>>' Operator in sql server soup:

    T-SQL has no bit-shift operators, so you'd have to implement one yourself. There's an implementation of a bitwise shifts here: http://sqlblog.com/blogs/adam_machanic/archive/2006/07/12/bitmask-handling-part-4-left-shift-and-right-shift.aspx

    \n

    You'd have to cast your integer to a varbinary, use the bitwise shift function and cast back to integer and (hopefully) hey-presto! There's your result you're expecting.

    \n

    Implementation and testing is left as an exercise for the reader...

    \n

    Edit - To try to clarify what I have put in the comments below, executing this SQL will demonstrate the different results given by the various CASTs:

    \n
    SELECT -5381 AS Signed_Integer,\n        cast(-5381 AS varbinary) AS Binary_Representation_of_Signed_Integer,\n        cast(cast(-5381 AS bigint) AS varbinary) AS Binary_Representation_of_Signed_Big_Integer, \n        cast(cast(-5381 AS varbinary) AS bigint) AS Signed_Integer_Transposed_onto_Big_Integer, \n        cast(cast(cast(-5381 AS varbinary) AS bigint) AS varbinary) AS Binary_Representation_of_Signed_Integer_Trasposed_onto_Big_Integer\n
    \n

    Results:

    \n
    Signed_Integer Binary_Representation_of_Signed_Integer                        Binary_Representation_of_Signed_Big_Integer                    Signed_Integer_Transposed_onto_Big_Integer Binary_Representation_of_Signed_Integer_Trasposed_onto_Big_Integer\n-------------- -------------------------------------------------------------- -------------------------------------------------------------- ------------------------------------------ ------------------------------------------------------------------\n-5381          0xFFFFEAFB                                                     0xFFFFFFFFFFFFEAFB                                             4294961915                                 0x00000000FFFFEAFB\n
    \n soup wrap:

    T-SQL has no bit-shift operators, so you'd have to implement one yourself. There's an implementation of bitwise shifts here: http://sqlblog.com/blogs/adam_machanic/archive/2006/07/12/bitmask-handling-part-4-left-shift-and-right-shift.aspx

    You'd have to cast your integer to a varbinary, apply the bitwise shift function, cast back to an integer and, hey presto, you have the result you're expecting.

    Implementation and testing are left as an exercise for the reader...

    Edit - To try to clarify what I have put in the comments below, executing this SQL will demonstrate the different results given by the various CASTs:

    SELECT -5381 AS Signed_Integer,
            cast(-5381 AS varbinary) AS Binary_Representation_of_Signed_Integer,
            cast(cast(-5381 AS bigint) AS varbinary) AS Binary_Representation_of_Signed_Big_Integer, 
            cast(cast(-5381 AS varbinary) AS bigint) AS Signed_Integer_Transposed_onto_Big_Integer, 
            cast(cast(cast(-5381 AS varbinary) AS bigint) AS varbinary) AS Binary_Representation_of_Signed_Integer_Transposed_onto_Big_Integer
    

    Results:

    Signed_Integer Binary_Representation_of_Signed_Integer                        Binary_Representation_of_Signed_Big_Integer                    Signed_Integer_Transposed_onto_Big_Integer Binary_Representation_of_Signed_Integer_Transposed_onto_Big_Integer
    -------------- -------------------------------------------------------------- -------------------------------------------------------------- ------------------------------------------ ------------------------------------------------------------------
    -5381          0xFFFFEAFB                                                     0xFFFFFFFFFFFFEAFB                                             4294961915                                 0x00000000FFFFEAFB
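    For reference, the unsigned right shift >>> on a 32-bit value is equivalent to reinterpreting the bit pattern as unsigned and then shifting, which is exactly what the varbinary transposition above does for -5381 (0xFFFFEAFB becomes 4294961915). A Python sketch, assuming a 32-bit width:

    ```python
    def urshift32(n, k):
        """Emulate a 32-bit unsigned right shift ('>>>')."""
        # Mask to the 32-bit pattern (reinterprets negatives as unsigned), then shift.
        return (n & 0xFFFFFFFF) >> k

    print(urshift32(-5381, 0))  # 4294961915 (the transposed value from the results)
    print(urshift32(-1, 28))    # 15
    ```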
    
    qid & accept id: (31086391, 31086498) query: Insert If duplicate not found in table else update soup:

    If the combination of roll and sub should be unique, you should define such a key in your table:

    \n
    ALTER TABLE student ADD CONSTRAINT student_uq UNIQUE(roll, sub)\n
    \n

    Note that if you do this, you don't have to explicitly create the index you're creating, the constraint will create on for you. Once you have this is place, you can use the on duplicate key syntax you were trying to use:

    \n
    INSERT INTO student(roll, mark, sub)\nVALUES (102, 22, 12)\nON DUPLICATE KEY UPDATE mark = VALUES(mark)\n
    \n soup wrap:

    If the combination of roll and sub should be unique, you should define such a key in your table:

    ALTER TABLE student ADD CONSTRAINT student_uq UNIQUE(roll, sub)
    

    Note that if you do this, you don't have to explicitly create the index you're creating; the constraint will create one for you. Once you have this in place, you can use the ON DUPLICATE KEY syntax you were trying to use:

    INSERT INTO student(roll, mark, sub)
    VALUES (102, 22, 12)
    ON DUPLICATE KEY UPDATE mark = VALUES(mark)
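    Note that ON DUPLICATE KEY is MySQL-specific syntax. As a runnable illustration of the same insert-or-update pattern, here is the SQLite equivalent (ON CONFLICT ... DO UPDATE, available in SQLite 3.24+) driven from Python:

    ```python
    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE student (roll INT, mark INT, sub INT, UNIQUE(roll, sub))')

    upsert = '''INSERT INTO student (roll, mark, sub) VALUES (?, ?, ?)
                ON CONFLICT(roll, sub) DO UPDATE SET mark = excluded.mark'''
    conn.execute(upsert, (102, 22, 12))  # no conflict: inserts a new row
    conn.execute(upsert, (102, 99, 12))  # same (roll, sub): updates mark only

    print(conn.execute('SELECT roll, mark, sub FROM student').fetchall())  # [(102, 99, 12)]
    ```

    As in the MySQL version, the unique constraint on (roll, sub) is what makes the conflict clause fire.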
    
    qid & accept id: (31124970, 31125712) query: how to filter all columns together having not null value in sql server soup:

    For SQL Server, if you can handle returning an "extra" column, you can do something like this:

    \n
     ;WITH xmlnamespaces('http://www.w3.org/2001/XMLSchema-instance' AS ns)\n  SELECT v.*\n    FROM ( SELECT t.*\n                , (SELECT t.*\n                      FOR xml path('row'), elements xsinil, type\n                  ).value('count(//*/@ns:nil)', 'int') AS NullCount\n            FROM table_name t\n         ) v\n   WHERE v.NullCount = 0\n
    \n

    I couldn't get the NullCount expression into a HAVING clause, this was as close as I could come. So this returns an extra NullCount column.

    \n
    \n

    Tested on SQL Server 2008

    \n
     CREATE TABLE foo\n ( id      INT NULL\n , col2    INT NULL\n , col3    VARCHAR(10) NULL\n , col4    DATE NULL\n , col5    DECIMAL(14,5) NULL\n );\n\n INSERT INTO foo (id, col2, col3, col4, col5) VALUES\n  (1,NULL,NULL,NULL,NULL)\n ,(2,2,'2','2/2/2012',22.22)\n ,(3,3,'3','3/3/2013',333.333)\n ,(4,4,NULL,'4/4/2014',4444.4444)\n ,(5,5,'5',NULL,55555.55555)\n ,(6,6,'6','6/6/2016',NULL)\n ;\n\n ;WITH xmlnamespaces('http://www.w3.org/2001/XMLSchema-instance' AS ns)\n  SELECT t.*\n       , (SELECT t.*\n             FOR xml path('row'), elements xsinil, type\n         ).value('count(//*/@ns:nil)', 'int') AS NullCount\n    FROM foo t\n ;\n\n ;WITH xmlnamespaces('http://www.w3.org/2001/XMLSchema-instance' AS ns)\n  SELECT v.*\n    FROM ( SELECT t.*\n                , (SELECT t.*\n                      FOR xml path('row'), elements xsinil, type\n                  ).value('count(//*/@ns:nil)', 'int') AS NullCount\n             FROM foo t\n         ) v\n   WHERE v.NullCount = 0\n ;\n
    \n soup wrap:

    For SQL Server, if you can handle returning an "extra" column, you can do something like this:

     ;WITH xmlnamespaces('http://www.w3.org/2001/XMLSchema-instance' AS ns)
      SELECT v.*
        FROM ( SELECT t.*
                    , (SELECT t.*
                          FOR xml path('row'), elements xsinil, type
                      ).value('count(//*/@ns:nil)', 'int') AS NullCount
                FROM table_name t
             ) v
       WHERE v.NullCount = 0
    

    I couldn't get the NullCount expression into a HAVING clause; this was as close as I could come, so it returns an extra NullCount column.


    Tested on SQL Server 2008

     CREATE TABLE foo
     ( id      INT NULL
     , col2    INT NULL
     , col3    VARCHAR(10) NULL
     , col4    DATE NULL
     , col5    DECIMAL(14,5) NULL
     );
    
     INSERT INTO foo (id, col2, col3, col4, col5) VALUES
      (1,NULL,NULL,NULL,NULL)
     ,(2,2,'2','2/2/2012',22.22)
     ,(3,3,'3','3/3/2013',333.333)
     ,(4,4,NULL,'4/4/2014',4444.4444)
     ,(5,5,'5',NULL,55555.55555)
     ,(6,6,'6','6/6/2016',NULL)
     ;
    
     ;WITH xmlnamespaces('http://www.w3.org/2001/XMLSchema-instance' AS ns)
      SELECT t.*
           , (SELECT t.*
                 FOR xml path('row'), elements xsinil, type
             ).value('count(//*/@ns:nil)', 'int') AS NullCount
        FROM foo t
     ;
    
     ;WITH xmlnamespaces('http://www.w3.org/2001/XMLSchema-instance' AS ns)
      SELECT v.*
        FROM ( SELECT t.*
                    , (SELECT t.*
                          FOR xml path('row'), elements xsinil, type
                      ).value('count(//*/@ns:nil)', 'int') AS NullCount
                 FROM foo t
             ) v
       WHERE v.NullCount = 0
     ;
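    If the filtering can happen outside the database, the WHERE NullCount = 0 condition reduces to "keep rows where no column is NULL". A Python sketch with hypothetical rows mirroring the foo sample:

    ```python
    rows = [
        {'id': 1, 'col2': None, 'col3': None, 'col4': None,       'col5': None},
        {'id': 2, 'col2': 2,    'col3': '2',  'col4': '2/2/2012', 'col5': 22.22},
        {'id': 4, 'col2': 4,    'col3': None, 'col4': '4/4/2014', 'col5': 4444.4444},
    ]

    # Keep only rows where every column is non-NULL: the NullCount = 0 filter.
    complete = [r for r in rows if all(v is not None for v in r.values())]
    print([r['id'] for r in complete])  # [2]
    ```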
    
    qid & accept id: (31132669, 31132718) query: Is there a way to get a range of records in Postgres using LIMIT keyword soup:

    Use the OFFSET function.

    \n

    First 30000:

    \n
    SELECT *\nFROM artist t1\nORDER BY count DESC\nLIMIT 30000;\n
    \n

    30001 to 60000

    \n
    SELECT *\nFROM artist t1\nORDER BY count DESC\nLIMIT 30000 OFFSET 30001;\n
    \n

    60001 to 90000

    \n
    SELECT *\nFROM artist t1\nORDER BY count DESC\nLIMIT 30000 OFFSET 60001;\n
    \n soup wrap:

    Use the OFFSET clause.

    First 30000:

    SELECT *
    FROM artist t1
    ORDER BY count DESC
    LIMIT 30000;
    

    30001 to 60000

    SELECT *
    FROM artist t1
    ORDER BY count DESC
    LIMIT 30000 OFFSET 30000;
    

    60001 to 90000

    SELECT *
    FROM artist t1
    ORDER BY count DESC
    LIMIT 30000 OFFSET 60000;
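    The general pattern is OFFSET = (page_number - 1) * page_size: OFFSET counts rows to skip, so rows 30001 to 60000 need an offset of exactly 30000. A small runnable SQLite sketch of the same pagination (table and data are made up):

    ```python
    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE artist (name TEXT, count INT)')
    conn.executemany('INSERT INTO artist VALUES (?, ?)',
                     [('artist%d' % i, i) for i in range(1, 11)])

    def page(page_number, page_size):
        # Skip exactly (page_number - 1) * page_size rows.
        return conn.execute(
            'SELECT count FROM artist ORDER BY count DESC LIMIT ? OFFSET ?',
            (page_size, (page_number - 1) * page_size)).fetchall()

    print(page(1, 3))  # [(10,), (9,), (8,)]
    print(page(2, 3))  # [(7,), (6,), (5,)]
    ```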
    
    qid & accept id: (31143856, 31144212) query: Getting the highest number from a mysql query soup:

    I suggest you aim for less than complete perfection in your assignment of ord values. You can get away with this as follows:

    \n
      \n
    1. don't make ord unique. (It isn't).
    2. \n
    3. rely on the ordering of phonebook_name to get a good order of names. MySQL has these wonderful case-insensitive collations for precisely this purpose.
    4. \n
    5. I suppose you're trying to make some of the entries for a company come first, and others come last. Set the ord column to 50 for everybody, then give the entries you want first lower numbers, and the ones you want last higher numbers.
    6. \n
    \n

    When you display data for a particular company, do it like this ...

    \n
    SELECT whatever, whatever\n  FROM phonebook\n WHERE id_company = 11\n ORDER BY ord, phonebook_name, phonebook_number, id_phonebook\n
    \n

    This ORDER BY clause will do what you want, and it will be stable if there are duplicates. You can then, in your user interface, move an entry up with a query like this.

    \n
    UPDATE phonebook SET ord=ord-1 WHERE id_phonebook = :recordnumber\n
    \n soup wrap:

    I suggest you aim for less than complete perfection in your assignment of ord values. You can get away with this as follows:

    1. Don't make ord unique. (It isn't.)
    2. Rely on the ordering of phonebook_name to get a good order of names. MySQL has wonderful case-insensitive collations for precisely this purpose.
    3. I suppose you're trying to make some of the entries for a company come first, and others come last. Set the ord column to 50 for everybody, then give the entries you want first lower numbers, and the ones you want last higher numbers.

    When you display data for a particular company, do it like this ...

    SELECT whatever, whatever
      FROM phonebook
     WHERE id_company = 11
     ORDER BY ord, phonebook_name, phonebook_number, id_phonebook
    

    This ORDER BY clause will do what you want, and it will be stable if there are duplicates. You can then, in your user interface, move an entry up with a query like this.

    UPDATE phonebook SET ord=ord-1 WHERE id_phonebook = :recordnumber
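    Here is a runnable sketch of that scheme in SQLite (the data is made up; SQLite spells its case-insensitive collation COLLATE NOCASE rather than using MySQL's collations):

    ```python
    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.execute('''CREATE TABLE phonebook
                    (id_phonebook INT, id_company INT, ord INT, phonebook_name TEXT)''')
    conn.executemany('INSERT INTO phonebook VALUES (?, ?, ?, ?)', [
        (1, 11, 50, 'Charlie'),   # default ord
        (2, 11, 50, 'alice'),     # default ord; lowercase on purpose
        (3, 11, 10, 'Zoe'),       # pushed to the front
        (4, 11, 90, 'Bob'),       # pushed to the back
    ])

    rows = conn.execute('''SELECT phonebook_name FROM phonebook
                           WHERE id_company = 11
                           ORDER BY ord, phonebook_name COLLATE NOCASE, id_phonebook''').fetchall()
    print([r[0] for r in rows])  # ['Zoe', 'alice', 'Charlie', 'Bob']
    ```

    The ord column decides the rough placement, the case-insensitive name sort handles the bulk at ord = 50, and id_phonebook keeps ties stable.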
    
    qid & accept id: (31173730, 31174069) query: How to display row value as column value in SQL Server (only one column rows value should be displayed as multiple columns) soup:

    Here is one approach using dynamic crosstab:

    \n

    SQL Fiddle

    \n

    Generate sample data

    \n
    use tempdb;\nCREATE TABLE yourtable(\n    id          INT,\n    pname       VARCHAR(20),\n    childname   VARCHAR(20)\n)\nINSERT INTO yourtable VALUES\n(1, 'Parent1', 'p1child1'), \n(1, 'Parent1', 'p1child2'), \n(1, 'Parent1', 'p1child3'), \n(2, 'Parent2', 'p2child1'), \n(2, 'Parent2', 'p2child2'), \n(3, 'Parent3', 'p3child1'), \n(3, 'Parent3', 'p3child2'), \n(3, 'Parent3', 'p3child3'), \n(3, 'Parent3', 'p3child4'), \n(4, 'Parent4', 'p4child1'), \n(4, 'Parent4', 'p4child2'), \n(4, 'Parent4', 'p4child3');\n
    \n

    Dynamic Crosstab

    \n
    DECLARE @maxNoChildren INT\nDECLARE @sql1 VARCHAR(4000) = ''\nDECLARE @sql2 VARCHAR(4000) = ''\nDECLARE @sql3 VARCHAR(4000) = ''\n\nSELECT TOP 1 @maxNoChildren = COUNT(*) FROM yourtable GROUP BY id ORDER BY COUNT(*) DESC\n\nSELECT @sql1 = \n'SELECT\n    id\n    ,pname\n'\n\nSELECT @sql2 = @sql2 +\n'   ,MAX(CASE WHEN RN = ' + CONVERT(VARCHAR(5), N) + ' THEN childname END) AS ' + QUOTENAME('child' + CONVERT(VARCHAR(5), N)) + CHAR(10)\nFROM(\n    SELECT TOP(@maxNoChildren)\n        ROW_NUMBER() OVER(ORDER BY (SELECT NULL))\n    FROM sys.columns a\n    --CROSS JOIN sys.columns b\n)T(N)\nORDER BY N\n\nSELECT @sql3 =\n'FROM(\n    SELECT *,\n        RN = ROW_NUMBER() OVER(PARTITION BY id ORDER BY (SELECT NULL))\n    FROM yourtable\n)t\nGROUP BY id, pname\nORDER BY id'\n\nPRINT(@sql1 + @sql2 + @sql3)\nEXEC (@sql1 + @sql2 + @sql3)\n
    \n

    Result

    \n
    | id |   pname |   child1 |   child2 |   child3 |   child4 |\n|----|---------|----------|----------|----------|----------|\n|  1 | Parent1 | p1child1 | p1child2 | p1child3 |   (null) |\n|  2 | Parent2 | p2child1 | p2child2 |   (null) |   (null) |\n|  3 | Parent3 | p3child1 | p3child2 | p3child3 | p3child4 |\n|  4 | Parent4 | p4child1 | p4child2 | p4child3 |   (null) |\n
    \n soup wrap:

    Here is one approach using a dynamic crosstab:

    Generate sample data

    use tempdb;
    CREATE TABLE yourtable(
        id          INT,
        pname       VARCHAR(20),
        childname   VARCHAR(20)
    )
    INSERT INTO yourtable VALUES
    (1, 'Parent1', 'p1child1'), 
    (1, 'Parent1', 'p1child2'), 
    (1, 'Parent1', 'p1child3'), 
    (2, 'Parent2', 'p2child1'), 
    (2, 'Parent2', 'p2child2'), 
    (3, 'Parent3', 'p3child1'), 
    (3, 'Parent3', 'p3child2'), 
    (3, 'Parent3', 'p3child3'), 
    (3, 'Parent3', 'p3child4'), 
    (4, 'Parent4', 'p4child1'), 
    (4, 'Parent4', 'p4child2'), 
    (4, 'Parent4', 'p4child3');
    

    Dynamic Crosstab

    DECLARE @maxNoChildren INT
    DECLARE @sql1 VARCHAR(4000) = ''
    DECLARE @sql2 VARCHAR(4000) = ''
    DECLARE @sql3 VARCHAR(4000) = ''
    
    SELECT TOP 1 @maxNoChildren = COUNT(*) FROM yourtable GROUP BY id ORDER BY COUNT(*) DESC
    
    SELECT @sql1 = 
    'SELECT
        id
        ,pname
    '
    
    SELECT @sql2 = @sql2 +
    '   ,MAX(CASE WHEN RN = ' + CONVERT(VARCHAR(5), N) + ' THEN childname END) AS ' + QUOTENAME('child' + CONVERT(VARCHAR(5), N)) + CHAR(10)
    FROM(
        SELECT TOP(@maxNoChildren)
            ROW_NUMBER() OVER(ORDER BY (SELECT NULL))
        FROM sys.columns a
        --CROSS JOIN sys.columns b
    )T(N)
    ORDER BY N
    
    SELECT @sql3 =
    'FROM(
        SELECT *,
            RN = ROW_NUMBER() OVER(PARTITION BY id ORDER BY (SELECT NULL))
        FROM yourtable
    )t
    GROUP BY id, pname
    ORDER BY id'
    
    PRINT(@sql1 + @sql2 + @sql3)
    EXEC (@sql1 + @sql2 + @sql3)
    

    Result

    | id |   pname |   child1 |   child2 |   child3 |   child4 |
    |----|---------|----------|----------|----------|----------|
    |  1 | Parent1 | p1child1 | p1child2 | p1child3 |   (null) |
    |  2 | Parent2 | p2child1 | p2child2 |   (null) |   (null) |
    |  3 | Parent3 | p3child1 | p3child2 | p3child3 | p3child4 |
    |  4 | Parent4 | p4child1 | p4child2 | p4child3 |   (null) |
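    The core of the crosstab (number the children within each parent, then spread them across a fixed number of columns padded with NULLs) can be sketched in plain Python:

    ```python
    from collections import defaultdict

    rows = [(1, 'Parent1', 'p1child1'), (1, 'Parent1', 'p1child2'),
            (2, 'Parent2', 'p2child1'),
            (3, 'Parent3', 'p3child1'), (3, 'Parent3', 'p3child2'), (3, 'Parent3', 'p3child3')]

    children = defaultdict(list)
    for pid, pname, child in rows:
        children[(pid, pname)].append(child)   # list position plays the role of RN

    width = max(len(v) for v in children.values())   # plays the role of @maxNoChildren
    pivot = [(pid, pname) + tuple(v + [None] * (width - len(v)))
             for (pid, pname), v in sorted(children.items())]
    print(pivot)
    ```

    The dynamic SQL does the same thing: it counts the widest group first, then generates one MAX(CASE WHEN RN = n ...) column per slot.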
    
    qid & accept id: (31186960, 31187028) query: Identify last record in CASE soup:

    There are two possible scenarios from what you have explained in your question.

    \n

    One of them is the one in which for the max value found in column a you want to display a certain message:

    \n
    SELECT\n    a\n    , CASE\n        WHEN a = 1 THEN 'ONE'\n        WHEN a = 2 THEN 'TWO'\n        WHEN a = (SELECT MAX(a) FROM test) THEN 'MAX'\n        ELSE 'OTHER'\n     END\nFROM TEST;\n
    \n

    The other possible scenario is that only for the last record in the table you want to display that certain message. And in that scenario your query needs to change to:

    \n
    SELECT\n    a\n    , CASE\n        WHEN a = 1 THEN 'ONE'\n        WHEN a = 2 THEN 'TWO'\n        WHEN a = (SELECT TOP 1 a FROM TEST ORDER BY a DESC) THEN 'MAX'\n        ELSE 'OTHER'\n     END\nFROM TEST\nORDER BY A;\n
    \n soup wrap:

    There are two possible scenarios from what you have explained in your question.

    In the first, you want to display a certain message for the max value found in column a:

    SELECT
        a
        , CASE
            WHEN a = 1 THEN 'ONE'
            WHEN a = 2 THEN 'TWO'
            WHEN a = (SELECT MAX(a) FROM test) THEN 'MAX'
            ELSE 'OTHER'
         END
    FROM TEST;
    

    The other possible scenario is that you want to display that message only for the last record in the table. In that scenario your query changes to:

    SELECT
        a
        , CASE
            WHEN a = 1 THEN 'ONE'
            WHEN a = 2 THEN 'TWO'
            WHEN a = (SELECT TOP 1 a FROM TEST ORDER BY a DESC) THEN 'MAX'
            ELSE 'OTHER'
         END
    FROM TEST
    ORDER BY A;
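    Both variants are easy to verify on a toy table. Here is the MAX-subquery version run against SQLite from Python (the TOP 1 ... ORDER BY form would be spelled LIMIT 1 in SQLite):

    ```python
    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE test (a INT)')
    conn.executemany('INSERT INTO test VALUES (?)', [(1,), (2,), (3,), (7,)])

    rows = conn.execute('''SELECT a,
                                  CASE WHEN a = 1 THEN 'ONE'
                                       WHEN a = 2 THEN 'TWO'
                                       WHEN a = (SELECT MAX(a) FROM test) THEN 'MAX'
                                       ELSE 'OTHER'
                                  END
                           FROM test ORDER BY a''').fetchall()
    print(rows)  # [(1, 'ONE'), (2, 'TWO'), (3, 'OTHER'), (7, 'MAX')]
    ```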
    
    qid & accept id: (31194265, 31194558) query: Trigger referring to another subelement table soup:
    ALTER TABLE\n        InvoicesElements\nADD CONSTRAINT\n        CHK_GOOD\nCHECK   (good <> 'Bike' OR good IS NULL)\n
    \n

    Update:

    \n
    CREATE TRIGGER\n        TR_InvoicesElements_AIU\nON      InvoicesElements\nAFTER   INSERT, UPDATE\nAS\n        IF EXISTS\n                (\n                SELECT  NULL\n                FROM    INSERTED ie\n                JOIN    Invoices inv\n                ON      inv.id = ie.invoiceId\n                WHERE   ie.good = 'bike'\n                        AND inv.customer = 'ABC'\n                )\n                THROW 50000, 'Not sure why but you cannot sell bikes to ABC', 0\nGO\n\nCREATE TRIGGER\n        TR_Invoices_AIU\nON      Invoices\nAFTER   INSERT, UPDATE\nAS\n        IF EXISTS\n                (\n                SELECT  NULL\n                FROM    InvoiceElements ie\n                JOIN    INSERTED inv\n                ON      inv.id = ie.invoiceId\n                WHERE   ie.good = 'bike'\n                        AND inv.customer = 'ABC'\n                )\n                THROW 50000, 'Not sure why but you cannot sell bikes to ABC', 0\nGO\n
    \n soup wrap:
    ALTER TABLE
            InvoicesElements
    ADD CONSTRAINT
            CHK_GOOD
    CHECK   (good <> 'Bike' OR good IS NULL)
    

    Update:

    CREATE TRIGGER
            TR_InvoicesElements_AIU
    ON      InvoicesElements
    AFTER   INSERT, UPDATE
    AS
            IF EXISTS
                    (
                    SELECT  NULL
                    FROM    INSERTED ie
                    JOIN    Invoices inv
                    ON      inv.id = ie.invoiceId
                    WHERE   ie.good = 'bike'
                            AND inv.customer = 'ABC'
                    )
                    THROW 50000, 'Not sure why but you cannot sell bikes to ABC', 0
    GO
    
    CREATE TRIGGER
            TR_Invoices_AIU
    ON      Invoices
    AFTER   INSERT, UPDATE
    AS
            IF EXISTS
                    (
                    SELECT  NULL
                    FROM    InvoiceElements ie
                    JOIN    INSERTED inv
                    ON      inv.id = ie.invoiceId
                    WHERE   ie.good = 'bike'
                            AND inv.customer = 'ABC'
                    )
                    THROW 50000, 'Not sure why but you cannot sell bikes to ABC', 0
    GO
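    The CHECK-constraint route can be demonstrated with SQLite (the table layout here is assumed); inserting a forbidden value raises an integrity error:

    ```python
    import sqlite3

    conn = sqlite3.connect(':memory:')
    # Mirrors the CHK_GOOD constraint above, in SQLite syntax.
    conn.execute('''CREATE TABLE InvoicesElements (
                        invoiceId INT,
                        good TEXT CHECK (good <> 'Bike' OR good IS NULL))''')

    conn.execute("INSERT INTO InvoicesElements VALUES (1, 'Car')")   # allowed
    try:
        conn.execute("INSERT INTO InvoicesElements VALUES (2, 'Bike')")
    except sqlite3.IntegrityError as e:
        print('rejected:', e)
    ```

    The trigger pair in the update is needed only because the bike/customer rule spans two tables, which a single-table CHECK cannot express.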
    
    qid & accept id: (31231963, 31231981) query: FInding Duplicate records in a table and deleting those records using postgreSQL soup:

    Assuming that id is unique (as implied by your question), you can use delete with id:

    \n
    delete from cities c\n    where c.id > (select min(c2.id)\n                  from cities c2\n                  where c2.state = c.state and c2.cities = c.cities\n                 );\n
    \n

    If the id can also be the same, you can use ctid:

    \n
    delete from cities c\n    where c.ctid > (select min(c2.ctid)\n                    from cities c2\n                    where c2.state = c.state and c2.cities = c.cities and\n                          c2.id = c.id\n                   );\n
    \n soup wrap:

    Assuming that id is unique (as implied by your question), you can use delete with id:

    delete from cities c
        where c.id > (select min(c2.id)
                      from cities c2
                      where c2.state = c.state and c2.cities = c.cities
                     );
    

    If the id can also be the same, you can use ctid:

    delete from cities c
        where c.ctid > (select min(c2.ctid)
                        from cities c2
                        where c2.state = c.state and c2.cities = c.cities and
                              c2.id = c.id
                       );
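    Here is the first (unique id) variant run against SQLite from Python; note SQLite's DELETE does not accept a table alias, so the outer table is referenced by name:

    ```python
    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE cities (id INT, state TEXT, cities TEXT)')
    conn.executemany('INSERT INTO cities VALUES (?, ?, ?)', [
        (1, 'CA', 'Los Angeles'),
        (2, 'CA', 'Los Angeles'),   # duplicate of id 1
        (3, 'NY', 'New York'),
    ])

    # Keep the lowest id per (state, cities) group, delete the rest.
    conn.execute('''DELETE FROM cities
                    WHERE id > (SELECT MIN(c2.id) FROM cities c2
                                WHERE c2.state = cities.state
                                  AND c2.cities = cities.cities)''')
    print(conn.execute('SELECT id FROM cities ORDER BY id').fetchall())  # [(1,), (3,)]
    ```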
    
    qid & accept id: (31273373, 31273899) query: Changing WHERE clause using Correlated Queries soup wrap:

    I am not 100 percent sure if I understood the question correctly. But I think Gordon Linoff is missing part of the GROUP BY clause.

    SELECT 
        COUNT(DISTINCT(a.id)) AS pears,
        d.date, # This is what previously was CHANGE_ME
        u.geo_date
    FROM fruit_factory a
    JOIN dim_date d 
        ON a.run_date < d.date
    LEFT JOIN dim_user u
        ON u.id = a.user_id 
    WHERE a.run_date > u.geo_date
    GROUP BY d.date, u.geo_date
    

    Here is some explanation why the JOIN works.

    Take these tables:

    fruit_factory:

    id      run_date          user_id
    1       2015-08-30     3
    2       2015-09-01     2
    3       2015-09-02     1
    

    dim_date:

    date
    2015-09-01
    2015-09-02
    

    previously:

    SELECT ... WHERE date < CHANGE_ME.
    

    For September 1st:

    1       2015-08-30     3
    

    For September 2nd:

    1       2015-08-30     3
    2       2015-09-01     2
    

    Now you use the join, this is what the Join gives you:

    id      run_date          user_id    d.date
    1       2015-08-30     3              2015-09-01
    1       2015-08-30     3              2015-09-02
    2       2015-09-01     2              2015-09-02
    

    As you see, the first row is there twice now, because the join condition was met for both dates.

    If you now group by d.date in addition to what you grouped by before, it is like running all the previous per-day queries at the same time: grouping by d.date ensures the other groupings are each evaluated for exactly one value of CHANGE_ME.
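    The fan-out and per-date counting above can be reproduced in a few lines with Python's sqlite3 (a toy illustration of the inequality join, not the full query):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fruit_factory (id INT, run_date TEXT, user_id INT)")
conn.execute("CREATE TABLE dim_date (date TEXT)")
conn.executemany("INSERT INTO fruit_factory VALUES (?,?,?)",
                 [(1, "2015-08-30", 3), (2, "2015-09-01", 2), (3, "2015-09-02", 1)])
conn.executemany("INSERT INTO dim_date VALUES (?)", [("2015-09-01",), ("2015-09-02",)])
# The inequality join duplicates each fruit_factory row once per later date;
# grouping by d.date then counts "all rows before that date" per date.
rows = conn.execute("""
    SELECT d.date, COUNT(DISTINCT a.id)
    FROM fruit_factory a JOIN dim_date d ON a.run_date < d.date
    GROUP BY d.date
    ORDER BY d.date
""").fetchall()
print(rows)  # [('2015-09-01', 1), ('2015-09-02', 2)]
```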

    qid & accept id: (31277383, 31280260) query: How to constrain that JSON/JSONB values in a column be completely different? soup wrap:

    There is no built-in method to guarantee unique key/value pairs inside JSON values across the table, neither for json nor for jsonb.

    But you can achieve your goal with a helper table and an index, only considering the outermost level of your JSON values. This is not prepared for nested values.

    Solution for jsonb

    Requires Postgres 9.4, obviously.
    Works for json in Postgres 9.3, too, after minor modifications.

    Table layout

    CREATE TABLE example (
      example_id     serial PRIMARY KEY
    , totally_unique jsonb NOT NULL
    );
    
    CREATE TABLE example_key (
      key   text
    , value text
    , PRIMARY KEY (key, value)
    );
    

    Trigger function & trigger

    CREATE OR REPLACE FUNCTION trg_example_insupdelbef()
      RETURNS trigger AS
    $func$
    BEGIN
       -- split UPDATE into DELETE & INSERT to simplify
       IF TG_OP = 'UPDATE' THEN
          IF OLD.totally_unique IS DISTINCT FROM NEW.totally_unique THEN  -- keep going
          ELSE RETURN NEW;  -- exit, nothing to do
          END IF;
       END IF;
    
       IF TG_OP IN ('DELETE', 'UPDATE') THEN
          DELETE FROM example_key k
          USING  jsonb_each_text(OLD.totally_unique) j(key, value)
          WHERE  j.key = k.key
          AND    j.value = k.value;
    
          IF TG_OP = 'DELETE' THEN RETURN OLD;  -- exit, we are done
          END IF;
       END IF;
    
       INSERT INTO example_key(key, value)
       SELECT *
       FROM   jsonb_each_text(NEW.totally_unique) j;
    
       RETURN NEW;
    END
    $func$ LANGUAGE plpgsql;
    
    CREATE TRIGGER example_insupdelbef
    BEFORE INSERT OR DELETE OR UPDATE OF totally_unique ON example
    FOR EACH ROW EXECUTE PROCEDURE trg_example_insupdelbef();
    

    SQL Fiddle demonstrating INSERT / UPDATE / DELETE.
    Note that sqlfiddle.com doesn't provide a Postgres 9.4 cluster, yet. The demo emulates with json on pg 9.3.

    The key function to handle jsonb is jsonb_each_text(), which does exactly what you need, since your values are supposed to be text.

    Closely related answer for a Postgres array column with more explanation:

    Also consider the "righteous path" of normalization laid out there. Applies here as well.

    This is not as unbreakable as a UNIQUE constraint, since triggers can be circumvented by other triggers and more easily deactivated, but if you don't do anything of the sort, your constraint is enforced at all times.

    Note in particular that, per documentation:

    TRUNCATE will not fire any ON DELETE triggers that might exist for the tables. But it will fire ON TRUNCATE triggers.

    If you plan to TRUNCATE example, then make sure you TRUNCATE example_key as well, or create another trigger for that.

    Performance should be decently good. If your totally_unique column holds many keys and typically only few change per UPDATE, then it might pay to have separate logic for TG_OP = 'UPDATE' in your trigger: distill a change set between OLD and NEW, and only apply that to example_key.
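    For intuition, the trigger's bookkeeping can be sketched in plain Python: the example_key table becomes a set of (key, value) pairs, and jsonb_each_text() becomes json.loads(...).items() (a toy model, flat string-valued JSON only):

```python
import json

class Example:
    """Toy emulation of the helper-table approach: example_key's
    PRIMARY KEY (key, value) becomes a Python set of pairs."""
    def __init__(self):
        self.rows = {}     # example_id -> JSON document
        self.keys = set()  # (key, value) pairs, like example_key

    def insert(self, row_id, doc):
        pairs = set(json.loads(doc).items())  # like jsonb_each_text()
        if pairs & self.keys:
            raise ValueError("duplicate key/value pair")
        self.keys |= pairs
        self.rows[row_id] = doc

    def delete(self, row_id):
        # mirror the trigger's DELETE branch: release the row's pairs
        self.keys -= set(json.loads(self.rows.pop(row_id)).items())

ex = Example()
ex.insert(1, '{"a": "1", "b": "2"}')
try:
    ex.insert(2, '{"a": "1"}')  # ("a","1") already taken elsewhere
except ValueError as e:
    print(e)
ex.delete(1)
ex.insert(2, '{"a": "1"}')      # fine after the delete
```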

    qid & accept id: (31281768, 31306694) query: How to use SQL to attribute events to visit source in traffic logs? soup wrap:

    If you're using a database with window functions, you can do this with a reasonably short query. You can also see a working example of this query (with some dummy data), if you'd like to tinker with this on live data: https://modeanalytics.com/benn/reports/9f72b24dce58/query

    Each step in this is broken out as a common table expression. While this makes it easier to describe, the query could be written as a series of subqueries if that style's more your thing.

    Step 1: I made your table.

    WITH event_table AS (
        SELECT user_id AS dummy_ip,
               occurred_at,
               location AS dummy_referer,
               event_name
          FROM tutorial.playbook_events 
    )
    

    The example data I had didn't map exactly to your example, but this creates a table that roughly does. I mapped user_id to ip_address since those two fields are conceptually the same. location and referer have absolutely nothing to do with each other, but they're both event attributes associated with every event. And I had a location field in my data, so I went with it. Think of it like a physical referer or something, I guess.

    Step 2: Determine the time since the last event.

    with_last_event AS (
        SELECT *,
               LAG(occurred_at,1) OVER (PARTITION BY dummy_ip ORDER BY occurred_at) AS last_event
          FROM event_table
    )
    

    The LAG function here finds the time of the last event at that IP. If there was no last event, it's null.

    Step 3: Find which events mark the beginning of a new session.

    with_new_session_flag AS (
        SELECT *,
               CASE WHEN EXTRACT('EPOCH' FROM occurred_at) - EXTRACT('EPOCH' FROM last_event) >= (60 * 10) OR last_event IS NULL 
                    THEN 1 ELSE 0 END AS is_new_session,
               CASE WHEN EXTRACT('EPOCH' FROM occurred_at) - EXTRACT('EPOCH' FROM last_event) >= (60 * 10) OR last_event IS NULL 
                    THEN dummy_referer ELSE NULL END AS first_referer
          FROM with_last_event
    )
    

    Most platforms define new sessions as an action after a period of inactivity. The first case statement does that by looking for how long it's been since the previous event. If it's longer than the time you choose (in this case, 60 seconds * 10, so 10 minutes), then that event is flagged as the first one in a new session. It's flagged with a 1; non-first events are marked with a 0.

    The second case statement finds the same event, but rather than marking that event with a 1 to flag it as a new session, it returns the referer. If it's not a new session, it returns null.

    Step 4: Create session ids.

    with_session_ids AS (
        SELECT *,
               SUM(is_new_session) OVER (ORDER BY dummy_ip, occurred_at) AS global_session_id,
               SUM(is_new_session) OVER (PARTITION BY dummy_ip ORDER BY occurred_at) AS user_session_id
          FROM with_new_session_flag
    )
    

    These window functions produce a running total of the session flags (the column that's 1 when it's a new session and 0 when it's not). The result is a column that stays the same when a session doesn't change, and increments by 1 every time a new session starts. Depending on how you partition and order this window function, you can create session ids that are unique to that user and unique globally.

    Step 5: Find the original session referer.

    with_session_referer AS (
        SELECT *,
               MAX(first_referer) OVER (PARTITION BY global_session_id) AS session_referer
          FROM with_session_ids
    )
    

    This final window function looks for the MAX value of the first_referer for that global_session_id. Since that column was made to be null for every value other than the first event of that session, this will return the first_referer of that session for every event in that session.

    Step 6: Count some stuff.

    SELECT session_referer,
           COUNT(1) AS total_events,
           COUNT(DISTINCT global_session_id) AS distinct_sessions,
           COUNT(DISTINCT dummy_ip) AS distinct_ips
      FROM with_session_referer
     WHERE event_name = 'send_message'
     GROUP BY 1
    

    This last step is straightforward - filter your events to only the event you care about (Submit, in your example). Then count the number of events by session_referer, which is the first referer of the session in which that event occurred. By counting global_session_id and dummy_ip, you can also find how many sessions had that event, and how many distinct IPs logged that event.
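    A compressed, runnable sketch of steps 2-4 using Python's sqlite3 (SQLite 3.25+ supports the same LAG and running-SUM window functions; timestamps simplified to integer seconds):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (ip TEXT, ts INT, referer TEXT)")
conn.executemany("INSERT INTO events VALUES (?,?,?)", [
    ("1.1.1.1",    0, "google"),   # no prior event -> new session
    ("1.1.1.1",  120, "direct"),   # 2 min later -> same session
    ("1.1.1.1", 2000, "twitter"),  # >10 min gap -> new session
])
rows = conn.execute("""
    WITH flagged AS (
      SELECT *,
             CASE WHEN ts - LAG(ts) OVER (PARTITION BY ip ORDER BY ts) >= 600
                    OR LAG(ts) OVER (PARTITION BY ip ORDER BY ts) IS NULL
                  THEN 1 ELSE 0 END AS is_new_session
      FROM events),
    ids AS (
      -- running total of the flags = per-user session id
      SELECT *, SUM(is_new_session) OVER (PARTITION BY ip ORDER BY ts) AS session_id
      FROM flagged)
    SELECT ip, ts, session_id FROM ids ORDER BY ts
""").fetchall()
print(rows)  # [('1.1.1.1', 0, 1), ('1.1.1.1', 120, 1), ('1.1.1.1', 2000, 2)]
```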

    qid & accept id: (31333228, 31333303) query: How to Extract only email id in SQL SERVER soup wrap:

    In MySQL you could use SUBSTRING_INDEX as follows:

    SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(Id, '>', 1), '<', -1) Email
    FROM Tbl;
    

    In SQL Server it would be:

    SELECT SUBSTRING(Id, CHARINDEX('<', Id) + 1 , CHARINDEX('>', Id) - CHARINDEX('<', Id) - 1)
    FROM Tbl;
    
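    The SQL Server version is just "take the text between < and >"; the same logic in Python, for comparison:

```python
def extract_email(s):
    # Same idea as the CHARINDEX/SUBSTRING version:
    # start one past '<', stop at '>'.
    start = s.index("<") + 1
    end = s.index(">")
    return s[start:end]

print(extract_email("John Doe <john@example.com>"))  # john@example.com
```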
    qid & accept id: (31424830, 31424967) query: SELECT query with exclusions specified in other table - 1 soup wrap:

    Using NOT EXISTS:

    SELECT a.*
    FROM TableA a
    WHERE
        a.userid = @userid
        AND NOT EXISTS(
            SELECT 1 
            FROM TableB b
            WHERE
                b.blocked_userid = a.userid
                AND b.blocker_userid = a.friend_id
        )
    

    SQL Fiddle


    Using LEFT JOIN:

    SELECT a.*
    FROM TableA a
    LEFT JOIN TableB b
        ON b.blocked_userid = a.userid
        AND b.blocker_userid = a.friend_id
    WHERE
        a.userid = @userid
        AND b.blocked_userid IS NULL
    
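    Both forms should return the same rows; here is a quick equivalence check using Python's sqlite3 with made-up sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TableA (userid INT, friend_id INT)")
conn.execute("CREATE TABLE TableB (blocker_userid INT, blocked_userid INT)")
conn.executemany("INSERT INTO TableA VALUES (?,?)", [(1, 2), (1, 3)])
conn.execute("INSERT INTO TableB VALUES (3, 1)")  # user 3 blocks user 1
not_exists = conn.execute("""
    SELECT a.* FROM TableA a
    WHERE a.userid = 1
      AND NOT EXISTS (SELECT 1 FROM TableB b
                      WHERE b.blocked_userid = a.userid
                        AND b.blocker_userid = a.friend_id)
""").fetchall()
left_join = conn.execute("""
    SELECT a.* FROM TableA a
    LEFT JOIN TableB b
      ON b.blocked_userid = a.userid AND b.blocker_userid = a.friend_id
    WHERE a.userid = 1 AND b.blocked_userid IS NULL
""").fetchall()
print(not_exists, left_join)  # both [(1, 2)]: friendship with user 3 is filtered out
```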
    qid & accept id: (31444591, 31445274) query: Count the number of attributes that are NULL for a row soup wrap:

    You can have that without spelling out columns. Counter-pivot columns to rows and count. The aggregate function count() only counts non-null values, while count(*) counts all rows. The shortest and fastest way to count NULL values for more than a few columns is count(*) - count(col) ...

    Works for any table with any number of columns of any data types.

    In Postgres 9.3+ with built-in JSON functions:

    SELECT *
        , (SELECT count(*) - count(v) FROM json_each_text(row_to_json(t)) x(k,v)) AS ct_nulls
    FROM   tbl t;
    
    • As per comment: What is x(k,v)?

    json_each_text() returns a set of rows with two columns. Default column names are key and value, as can be seen in the manual I linked to. I provided table and column aliases so we don't have to rely on default names. The second column is named v.

    Or, in any Postgres version since at least 8.3 with the additional module hstore installed, even shorter and a bit faster:

    SELECT *,  (SELECT count(*) - count(v) FROM svals(hstore(t)) v) AS ct_nulls
    FROM   tbl t;
    

    This simpler version only returns a set of single values. I only provide a simple alias v, which is automatically taken to be table and column alias.

    Since the additional column is functionally dependent I would consider not to persist it in the table at all. Rather compute it on the fly like demonstrated above or create a tiny function with a polymorphic input type for the purpose:

    CREATE OR REPLACE FUNCTION f_ct_nulls(_row anyelement)
      RETURNS int AS
    $func$
    SELECT (count(*) - count(v))::int FROM svals(hstore(_row)) v
    $func$  LANGUAGE sql IMMUTABLE;
    

    Then:

    SELECT *, f_ct_nulls(t) AS ct_nulls
    FROM   tbl t;
    

    You could wrap this into a VIEW if you want ...

    SQL Fiddle demonstrating all.

    This should also answer your second question:

    ... the table name is obtained from argument, I don't know the schema of a table beforehand. That means I need to update the table with the input table name.
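    The count(*) - count(col) idea in miniature, using plain Python dicts as stand-ins for rows (None playing the role of NULL):

```python
rows = [
    {"a": 1,    "b": None, "c": "x"},
    {"a": None, "b": None, "c": None},
]
# Per row: total fields minus non-NULL fields, like count(*) - count(v)
null_counts = [len(r) - sum(v is not None for v in r.values()) for r in rows]
print(null_counts)  # [1, 3]
```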

    qid & accept id: (31448656, 31449486) query: How to generate Month list in PostgreSQL? soup wrap:

    You can generate sequences of data with the generate_series() function:

    SELECT to_char(generate_series(min, max, '1 month'), 'Mon-YY') AS "Mon-YY"
    FROM (
      SELECT date_trunc('month', min(startdate)) AS min, 
             date_trunc('month', max(startdate)) AS max
      FROM a) sub;
    

    This generates a row for every month, in a pretty format. If you want to have it like a list, you can aggregate them all in an outer query:

    SELECT string_agg("Mon-YY", ', ') AS "Mon-YY list"
    FROM (
      -- Query above
    ) subsub;
    

    SQLFiddle here
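    A Python sketch of the same generate_series-between-truncated-min-and-max idea, in case you want the month list outside the database (month_list is a hypothetical helper; the %b output assumes an English locale):

```python
from datetime import date

def month_list(start, end):
    # Rough analogue of generate_series(min, max, '1 month') with 'Mon-YY' formatting
    months = []
    y, m = start.year, start.month
    while (y, m) <= (end.year, end.month):
        months.append(date(y, m, 1).strftime("%b-%y"))
        y, m = y + (m == 12), m % 12 + 1  # step one month, rolling over December
    return months

print(", ".join(month_list(date(2015, 11, 1), date(2016, 2, 1))))
# Nov-15, Dec-15, Jan-16, Feb-16
```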

    qid & accept id: (31468957, 31469531) query: SQL Server 2008 R2: Select with condition soup wrap:

    Well, I have done it using a CTE.

    For 1 to 2:

    with cte
    AS
    (
        SELECT COUNT(DISTINCT Number) as a,Name from test
        group by name
    )   
    select DISTINCT x.Number,z.Name 
    from cte z
    inner join test x
    ON z.name = x.name
    WHERE z.a between 1 and 2;
    

    Result:

    number  name
    -------------
    111     PersonA
    211     PersonB
    212     PersonB
    311     PersonC
    313     PersonC
    

    For 2 to 2:

    with cte
    AS
    (
        SELECT COUNT(DISTINCT Number) as a,Name from test
        group by name
    )   
    select DISTINCT x.Number,z.Name 
    from cte z
    inner join test x
    ON z.name = x.name
    WHERE z.a between 2 and 2;
    

    Result:

    number  name
    -------------
    211     PersonB
    212     PersonB
    311     PersonC
    313     PersonC 
    
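    The CTE boils down to "keep names whose distinct Number count falls in the range"; here it is verified against the sample data with Python's sqlite3 (ORDER BY added for a deterministic result):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (Number INT, Name TEXT)")
conn.executemany("INSERT INTO test VALUES (?,?)",
                 [(111, "PersonA"), (211, "PersonB"), (212, "PersonB"),
                  (311, "PersonC"), (313, "PersonC")])
rows = conn.execute("""
    WITH cte AS (
        SELECT COUNT(DISTINCT Number) AS a, Name FROM test GROUP BY Name)
    SELECT DISTINCT x.Number, z.Name
    FROM cte z JOIN test x ON z.Name = x.Name
    WHERE z.a BETWEEN 2 AND 2
    ORDER BY x.Number
""").fetchall()
print(rows)  # PersonA (only 1 distinct Number) drops out
```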
    qid & accept id: (31478205, 31478277) query: Counting Values based on distinct values from another Column soup wrap:

    You can add DISTINCT to a COUNT:

    select OrderNo, count(distinct OrderLineNo)
    from tab
    group by OrderNo;
    

    Or if OrderLineNo always starts with 1 and increases without gaps:

    select OrderNo, max(OrderLineNo)
    from tab
    group by OrderNo;
    

    Edit:

    Based on the comment it's not a count per OrderNo, but a global count. You need to use a Derived Table:

    select count(*)
    from
     (select distinct OrderNo, OrderLineNo
      from tab
     ) as dt;
    

    or

    select sum(n)
    from
     (select OrderNo, max(OrderLineNo) as n
      from tab
      group by OrderNo
     ) as dt;
    

    or

    select sum(Dist_count)
    from
     ( select OrderNo,count(distinct OrderLineNo) as Dist_count
       from Table1
       group by OrderNo
     ) as dt
    
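    The derived-table variants all compute the same number; a tiny Python model showing why the global distinct-pair count equals the sum of per-order distinct counts:

```python
# (OrderNo, OrderLineNo) pairs, including one exact duplicate row
order_lines = [(1, 1), (1, 2), (1, 2), (2, 1), (2, 2), (2, 3)]

# count(*) over SELECT DISTINCT OrderNo, OrderLineNo
global_count = len(set(order_lines))

# sum of count(distinct OrderLineNo) grouped by OrderNo
per_order = {}
for order_no, line_no in order_lines:
    per_order.setdefault(order_no, set()).add(line_no)
summed = sum(len(lines) for lines in per_order.values())

print(global_count, summed)  # 5 5
```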
    qid & accept id: (31533615, 31534303) query: Combine rows if value is blank soup wrap:

    Edit

    DECLARE @Data table (Name varchar(10), Id varchar(10)) -- Id must be varchar for blank value
    INSERT @Data VALUES
    ('John', '1'),
    ('Peter', '2'),('Peter', '2'), 
    ('Peter', '3'),--('Peter', ''), --For test
    ('Lisa', '4'),
    ('Lisa', NULL),
    ('David', '5'),
    ('David', ''),
    ('Ralph', ''), ('Ralph', '')
    

    SELECT 
        Name, 
        Id, 
        COUNT(*) + ISNULL(
            (SELECT COUNT(*) FROM @data WHERE Name = d.Name AND Id = '' AND d.Id <> '')
        , 0) AS Cnt 
    FROM @data d 
    WHERE 
        Id IS NULL 
        OR Id <> '' 
        OR NOT EXISTS(SELECT * FROM @data WHERE Name = d.Name AND Id <> '')
    GROUP BY Name, Id
    
    qid & accept id: (31544792, 31544900) query: PostgreSQL getting row where one column is min soup wrap:

    Something like this.

    SELECT * 
    FROM   (SELECT Row_number() OVER (PARTITION BY data_bands ORDER BY jm_dist) AS rn, 
                   data_bands, 
                   thematic_class1, 
                   thematic_class2,
                   Avg(jm_dist) OVER (PARTITION BY data_bands) AS average_jm_dist,
                   jm_dist  
            FROM   separabilities) A 
    WHERE  rn = 1 
    

    Or you can join the result back to the main table to get the thematic_class1 and thematic_class2 for the minimum jm_dist per data_bands:

    SELECT * 
    FROM   separabilities A 
           INNER JOIN (SELECT sep.data_bands, 
                              Sum(sep.jm_dist) / Count(sep.data_bands) AS average_jm_dist,
                              Min(sep.jm_dist)                         AS jm_dist 
                       FROM   separabilities AS sep 
                       GROUP  BY sep.data_bands) B 
                   ON A.data_bands = B.data_bands 
                      AND A.jm_dist = B.jm_dist 
    ORDER  BY average_jm_dist 
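For a quick check of the window-function pattern, here is the same shape in SQLite (3.25+ for window functions); the separabilities rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE separabilities (data_bands TEXT, thematic_class1 TEXT,
                             thematic_class2 TEXT, jm_dist REAL);
INSERT INTO separabilities VALUES
 ('b1','c1','c2',1.2),('b1','c1','c3',0.8),
 ('b2','c2','c3',1.9),('b2','c1','c3',1.5);
""")
# rn = 1 picks the row with the smallest jm_dist in each data_bands group
rows = conn.execute("""
SELECT data_bands, thematic_class1, thematic_class2, jm_dist, average_jm_dist
FROM (SELECT *,
             ROW_NUMBER() OVER (PARTITION BY data_bands ORDER BY jm_dist) AS rn,
             AVG(jm_dist)  OVER (PARTITION BY data_bands) AS average_jm_dist
      FROM separabilities) t
WHERE rn = 1
ORDER BY data_bands
""").fetchall()
print(rows)
```

Each group keeps its full minimum row plus the group average, which is exactly what the join version computes as well.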
    
    qid & accept id: (31565962, 31566286) query: Get only rows where 2 conditions are fulfilled in Microsoft SQL soup:

    soup wrap:

    You almost had it. What you need to do is select all Reqid matching your conditions, and get all rows with that Reqid. This can be accomplished with a sub-query.

    Done using two sub-queries:

    SELECT * 
    FROM t1 
    WHERE Reqid in 
    (
        SELECT t11.Reqid 
        FROM t1 as t11
        WHERE 
            (t11.FIELDID='76' AND t11.LISTITEMID='3548') 
            OR (t11.FIELDID='77' AND t11.LISTITEMID='3550')
    ) 
    AND Reqid in 
    (
        SELECT t11.Reqid 
        FROM t1 as t11
        WHERE  
            (t11.FIELDID='86' AND (t11.LISTITEMID='3491' OR t11.LISTITEMID='2380')) 
            OR (t11.FIELDID='87' AND (t11.LISTITEMID='3494' OR t11.LISTITEMID='2386'))
    )
    ORDER BY REQUIREMENTID
    

    This can further be translated into a single sub-query using a JOIN.

    SELECT * 
    FROM t1 
    WHERE Reqid in 
    (
        SELECT t11.Reqid 
        FROM t1 as t11
        JOIN t1 as t12 on t11.Reqid = t12.Reqid
        WHERE 
            ((t11.FIELDID='76' AND t11.LISTITEMID='3548') OR (t11.FIELDID='77' AND t11.LISTITEMID='3550'))
            AND
            (
                (t12.FIELDID='86' AND (t12.LISTITEMID='3491' OR t12.LISTITEMID='2380')) 
                OR (t12.FIELDID='87' AND (t12.LISTITEMID='3494' OR t12.LISTITEMID='2386'))
            )
    ) 
    ORDER BY REQUIREMENTID
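The two-sub-query pattern can be exercised in SQLite with a toy t1; the Reqid/field/item rows here are invented to cover the three cases:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (Reqid INT, FieldId TEXT, ListItemId TEXT);
INSERT INTO t1 VALUES
 (1,'76','3548'),(1,'86','3491'),  -- satisfies both condition groups
 (2,'76','3548'),                  -- satisfies only the first group
 (3,'86','2380');                  -- satisfies only the second group
""")
rows = conn.execute("""
SELECT DISTINCT Reqid FROM t1
WHERE Reqid IN (SELECT Reqid FROM t1
                WHERE (FieldId='76' AND ListItemId='3548')
                   OR (FieldId='77' AND ListItemId='3550'))
  AND Reqid IN (SELECT Reqid FROM t1
                WHERE (FieldId='86' AND ListItemId IN ('3491','2380'))
                   OR (FieldId='87' AND ListItemId IN ('3494','2386')))
""").fetchall()
print(rows)
```

Only the Reqid present in both subquery result sets survives, which is the whole point of ANDing the two IN clauses.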
    
    qid & accept id: (31643835, 31644057) query: Difficulty in displaying large number of values on a Chart soup:

    soup wrap:

    If I were to guess that you were using MySQL, then you can use to_seconds(). The following gives the average reference price for each minute, along with the date/time of the first price in the interval:

    select min(receivedon), avg(referenceprice)
    from dbname
    where receivedon >= '2015-06-05 10:30' AND receivedon <= '2015-06-05 10:50'
    group by floor(to_seconds(receivedon) / 60) 
    

    EDIT:

    In SQL Server, you can do:

    select min(receivedon), avg(referenceprice)
    from dbname
    where receivedon >= '2015-06-05 10:30' AND receivedon <= '2015-06-05 10:50'
    group by datediff(minute, 0, receivedon);
    

    If you want the beginning of the period rather than the earlier timestamp:

    select dateadd(minute, datediff(minute, 0, receivedon), 0) as timeperiod,
           avg(referenceprice)
    from dbname
    where receivedon >= '2015-06-05 10:30' AND receivedon <= '2015-06-05 10:50'
    group by dateadd(minute, datediff(minute, 0, receivedon), 0);
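The per-minute bucketing translates to SQLite (standing in for MySQL/SQL Server here) by grouping on the minute prefix of the timestamp; sample prices are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE prices (receivedon TEXT, referenceprice REAL);
INSERT INTO prices VALUES
 ('2015-06-05 10:30:05', 10.0),
 ('2015-06-05 10:30:40', 12.0),
 ('2015-06-05 10:31:10', 20.0);
""")
# strftime truncates each timestamp to its minute, playing the role of
# to_seconds()/60 or datediff(minute, 0, ...)
rows = conn.execute("""
SELECT MIN(receivedon), AVG(referenceprice)
FROM prices
WHERE receivedon >= '2015-06-05 10:30' AND receivedon <= '2015-06-05 10:50'
GROUP BY strftime('%Y-%m-%d %H:%M', receivedon)
ORDER BY 1
""").fetchall()
print(rows)
```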
    
    qid & accept id: (31659215, 31659640) query: Combining two queries where one uses GROUP BY soup:

    soup wrap:

    How about just using a subquery?

    SELECT A.pers_key,
           B.sum_cost / A.months AS ind1,
           B.visit_count / A.months AS ind2
    FROM TABLE2 A JOIN
         (SELECT pers_key, SUM(cost) AS sum_cost,
                 COUNT(DISTINCT visit) AS visit_count
          FROM TABLE1
          GROUP BY pers_key
         ) B
         ON A.pers_key = B.pers_key;
    

    EDIT:

    Your question is a bit complicated. This is definitely a reasonable approach. It may be faster to put the subquery in a table and build an index on the table for the join. However, a red flag is the count(distinct). In my experience with Hive, the following is faster than the above subquery:

         (SELECT pers_key, SUM(sum_cost) AS sum_cost,
                 COUNT(visit) AS visit_count
          FROM (SELECT pers_key, visit, SUM(cost) as sum_cost
                FROM TABLE1
                GROUP BY pers_key, visit
               ) t
          GROUP BY pers_key
         ) B
    

    It is a bit counter-intuitive (to me) that this version is faster. But what happens is that Hive readily parallelizes the group bys, while the count(distinct) is processed serially. This sometimes occurs in other databases (I've seen similar behavior in Postgres with count(distinct)). And another caveat: I did not set up the Hive system where I discovered this, so it might be some sort of configuration issue.
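The claim that the two-level GROUP BY reproduces COUNT(DISTINCT) is easy to verify; this SQLite sketch (invented costs and visits) compares both forms:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (pers_key INT, visit INT, cost REAL);
INSERT INTO table1 VALUES (1,10,5.0),(1,10,7.0),(1,11,3.0),(2,20,4.0);
""")
# one pass with COUNT(DISTINCT)
direct = conn.execute("""
SELECT pers_key, SUM(cost) AS sum_cost, COUNT(DISTINCT visit) AS visit_count
FROM table1 GROUP BY pers_key ORDER BY pers_key
""").fetchall()
# two-level aggregation: pre-aggregate per (pers_key, visit), then roll up
two_level = conn.execute("""
SELECT pers_key, SUM(sum_cost) AS sum_cost, COUNT(visit) AS visit_count
FROM (SELECT pers_key, visit, SUM(cost) AS sum_cost
      FROM table1 GROUP BY pers_key, visit) t
GROUP BY pers_key ORDER BY pers_key
""").fetchall()
print(direct, two_level)
```

The inner GROUP BY collapses each visit to one row, so the plain COUNT in the outer query counts distinct visits.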

    qid & accept id: (31685845, 31686129) query: How to tag certain records in a table based on terms within another table? soup:

    soup wrap:

    You can set the tags in a single query:

    UPDATE Incoming
        SET TagName = (CASE WHEN title LIKE '%electric%' OR  
                                 title LIKE '%faceplate%' OR
                                 title LIKE '%wiring%' 
                            THEN 'Electrical' 
                            WHEN title LIKE '%drywall%' OR
                                 title LIKE '%sheetrock%' 
                            THEN 'Drywall'
                       END);
    

    With your table structure, you could do something like:

    UPDATE incoming
        SET TagName = (SELECT TOP 1 tst.TagName
                       FROM TagSearchTerms tst JOIN
                            TagName tn
                            ON tst.tagname = tn.tagname
                       WHERE incoming.title like '%' + tst.searchterm + '%'
                       ORDER BY tn.rank
                      );
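A runnable SQLite sketch of the CASE-based tagging, with made-up titles (unmatched titles are left as NULL, since the CASE has no ELSE):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Incoming (title TEXT, TagName TEXT);
INSERT INTO Incoming (title) VALUES
 ('replace electric faceplate'), ('patch drywall hole'), ('paint fence');

UPDATE Incoming
SET TagName = CASE WHEN title LIKE '%electric%' OR title LIKE '%faceplate%'
                     OR title LIKE '%wiring%' THEN 'Electrical'
                   WHEN title LIKE '%drywall%' OR title LIKE '%sheetrock%'
                     THEN 'Drywall'
              END;
""")
rows = conn.execute("SELECT title, TagName FROM Incoming ORDER BY title").fetchall()
print(rows)
```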
    
    qid & accept id: (31720854, 31724169) query: Convert clojure vector to flambo sql row soup:

    soup wrap:

    I assume you already have a spark-context (sc) and sql-context (sql-ctx). First let's import all the stuff we'll need:

    (import org.apache.spark.sql.RowFactory)
    (import org.apache.spark.sql.types.StructType)
    (import org.apache.spark.sql.types.StructField)
    (import org.apache.spark.sql.types.Metadata)
    (import org.apache.spark.sql.types.DataTypes)
    
    1. For each rdd (vector) convert it to rows

      ;; Vector to Row conversion
      (defn vec->row [v] 
        (RowFactory/create (into-array Object v)))
      
      ;; Example data
      (def rows (-> (f/parallelize sc [["foo" 1] ["bar" 2]])
                    (f/map vec->row)))
      
    2. Convert the rows to a data frame

      ;; Define schema
      (def schema
        (StructType.
         (into-array StructField
           [(StructField. "k" (DataTypes/StringType) false (Metadata/empty))
            (StructField. "v" (DataTypes/IntegerType) false (Metadata/empty))])))
      
      ;; Create data frame
      (def df (.createDataFrame sql-ctx rows schema))
      
      ;; See if it works
      (.show df)
      
    3. Save data frame to a table

      (.registerTempTable df "df")
      
    4. use the sqlContext to query for particular information in the table

      (def df-keys (.sql sql-ctx "SELECT UPPER(k) as k FROM df"))
      ;; Check results
      (.show df-keys)
      
    5. Convert the result from the query back into an RDD for further analysis.

      (.toJavaRDD df-keys)
      

      or if you want vectors:

      (f/map (.toJavaRDD df-keys) sql/row->vec)
      
    qid & accept id: (31721962, 31722196) query: Update table with duplicate rows in another table soup:

    soup wrap:

    I think you need to do the update before doing the delete. This query will set the facility id to the lowest facility id for that name:

    update hotel_facility hf
        set facility_id = (select min(f2.id)
                           from facility f join
                                facility f2
                                on f.name = f2.name
                           where f.id = hf.facility_id);
    

    Then you can delete all but the minimum:

    delete from facility f
        where exists (select 1
                      from facility f2
                      where f2.name = f.name and f2.id < f.id
                     );
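Here is the update-then-delete sequence end to end in SQLite, using a toy facility/hotel_facility pair (column names id/name assumed, matching the delete above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE facility (id INT, name TEXT);
CREATE TABLE hotel_facility (hotel_id INT, facility_id INT);
INSERT INTO facility VALUES (1,'Pool'),(2,'Pool'),(3,'Gym');
INSERT INTO hotel_facility VALUES (10,2),(11,3);

-- first repoint references at the lowest id sharing the same name
UPDATE hotel_facility
SET facility_id = (SELECT MIN(f2.id)
                   FROM facility f JOIN facility f2 ON f.name = f2.name
                   WHERE f.id = facility_id);

-- then drop every duplicate except the lowest id
DELETE FROM facility
WHERE EXISTS (SELECT 1 FROM facility f2
              WHERE f2.name = facility.name AND f2.id < facility.id);
""")
facilities = conn.execute("SELECT id, name FROM facility ORDER BY id").fetchall()
links = conn.execute(
    "SELECT hotel_id, facility_id FROM hotel_facility ORDER BY hotel_id").fetchall()
print(facilities, links)
```

Doing the update first is what keeps the foreign keys from dangling once the duplicate facilities are deleted.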
    
    qid & accept id: (31722522, 31722609) query: SQL Server all(select ...) is null soup:

    soup wrap:

    Do you want not exists?

    select t1.x 
    from @tablename as t1 
    where not exists (select t2.y from @tablename as t2 where t1.x = t2.x) 
    

    This tests that there are no matching values.

    Or, perhaps,

    select t1.x 
    from @tablename t1 
    where not exists (select 1
                      from @tablename as t2
                      where t1.x = t2.x and t2.y is not null
                     ) ;
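A small SQLite check of the second form (x values whose every matching y is NULL), with invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (x INT, y INT);
INSERT INTO t VALUES (1,NULL),(1,NULL),(2,5),(3,NULL),(3,7);
""")
# x values for which no row has a non-NULL y
only_null = [r[0] for r in conn.execute("""
SELECT DISTINCT t1.x FROM t AS t1
WHERE NOT EXISTS (SELECT 1 FROM t AS t2
                  WHERE t1.x = t2.x AND t2.y IS NOT NULL)
ORDER BY t1.x""")]
print(only_null)
```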
    

    This tests that any matching value has NULL for y.

    qid & accept id: (31723200, 31723773) query: Get date as 'from date' and 'to date' from same column in table soup:

    soup wrap:

    You need a window function to do this efficiently:

    SELECT id, percentage, vat, service_tax, labor_welfare,
           daterange(lag(changed_date) OVER (PARTITION BY project_id ORDER BY changed_date)::date,
                      changed_date::date, '()') AS changed_date_range, project_id
    FROM my_table
    ORDER BY project_id, changed_date;
    

    The output is a daterange which will look like '(2015-07-02, 2015-07-15)'. If you prefer a string format you can change the daterange(...) phrase into something like:

    to_char(lag(changed_date) OVER (PARTITION BY project_id ORDER BY changed_date), 'YYYY-MM-DD') ||
    ' - ' || to_char(changed_date, 'YYYY-MM-DD') AS changed_date_range
    

    or you can simply have two columns:

    lag(changed_date) OVER (PARTITION BY project_id ORDER BY changed_date) AS date_from,
    changed_date AS date_to
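The two-column lag() variant works verbatim in SQLite 3.25+ as well; sample project rows invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE my_table (project_id INT, changed_date TEXT);
INSERT INTO my_table VALUES
 (1,'2015-07-02'),(1,'2015-07-15'),(1,'2015-08-01');
""")
# each row pairs its own date with the previous date in the same project
rows = conn.execute("""
SELECT project_id,
       LAG(changed_date) OVER (PARTITION BY project_id
                               ORDER BY changed_date) AS date_from,
       changed_date AS date_to
FROM my_table
ORDER BY project_id, changed_date
""").fetchall()
print(rows)
```

The first row of each partition gets NULL for date_from, which is the expected open-ended start.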
    
    qid & accept id: (31741652, 31742005) query: Sql update statement with variable soup:

    soup wrap:

    SQL Fiddle

    Schema details

    create table user
    (userid varchar(30));
    
    create table logs
    (log_detail varchar(100),
     userid varchar(30));
    
    insert into user values('user1');
    insert into user values('user2');
    insert into user values('user3');
    
    insert into logs values('update by user1','user3');
    insert into logs values('inserted by user2','user2');
    insert into logs values('inserted by user3',null);
    

    Table data before update

    |        log_detail | userid |
    |-------------------|--------|
    |   update by user1 |  user3 |
    | inserted by user2 |  user2 |
    | inserted by user3 | (null) |
    

    Update Query

     update logs join user
    set logs.userid=user.userid
    where logs.log_detail LIKE concat("%",user.userID,"%");
    

    Table data after update

    |        log_detail | userid |
    |-------------------|--------|
    |   update by user1 |  user1 |
    | inserted by user2 |  user2 |
    | inserted by user3 |  user3 |
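SQLite has no UPDATE ... JOIN, but the same effect comes from a correlated subquery with || concatenation in place of concat(); this sketch reuses the sample rows above (note that if a log line matched several users, the subquery would pick one arbitrarily):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (userid TEXT);
CREATE TABLE logs (log_detail TEXT, userid TEXT);
INSERT INTO users VALUES ('user1'),('user2'),('user3');
INSERT INTO logs VALUES ('update by user1','user3'),
                        ('inserted by user2','user2'),
                        ('inserted by user3',NULL);

-- set each log's userid to the user mentioned in its log_detail
UPDATE logs
SET userid = (SELECT u.userid FROM users u
              WHERE logs.log_detail LIKE '%' || u.userid || '%');
""")
rows = conn.execute("SELECT log_detail, userid FROM logs ORDER BY log_detail").fetchall()
print(rows)
```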
    
    qid & accept id: (31771130, 31771199) query: How to add generic condition to sp select? soup:

    soup wrap:

    You can use AND/OR logic to simulate the If-else condition in where clause. Try something like this

    select * from users 
    where
    parentid= @id 
    and 
    (
    (@Type = 1 and UserType <> 0)
    or 
    (@Type = 2 and UserType = 0)
    or 
    (@Type = 3)
    )
    

    or you can also use Dynamic sql to do this

    declare @Id uniqueidentifier = 'some parent guid'
    declare @Type int = 1 -- can be 1, 2 or 3
    Declare @UserType varchar(max) --can be 0, anything else than 0, or all users at once
    Declare @sql nvarchar(max)
    
    if(@Type = 1)
    set @UserType = ' and UserType <> 0'
    if(@Type = 2)
    set @UserType = ' and UserType = 0'
    if(@Type = 3)
    set @UserType = ''
    
    set @sql = 'select * from users where parentId ='''+ cast(@Id as varchar(36))+''''+ @UserType 
    
    --Print @sql
    
    Exec sp_executesql @sql
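The AND/OR trick is easy to verify with parameters in SQLite (toy users rows; the :t parameter plays the role of @Type):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (parentid INT, UserType INT, name TEXT);
INSERT INTO users VALUES (1,0,'a'),(1,2,'b'),(2,0,'c');
""")

def find(parent, type_):
    # :t = 1 -> UserType <> 0; :t = 2 -> UserType = 0; :t = 3 -> all users
    return [r[0] for r in conn.execute("""
        SELECT name FROM users
        WHERE parentid = :id
          AND ((:t = 1 AND UserType <> 0)
            OR (:t = 2 AND UserType = 0)
            OR  :t = 3)
        ORDER BY name""", {"id": parent, "t": type_})]

print(find(1, 1), find(1, 2), find(1, 3))
```

One static statement covers all three modes, which is the main advantage over building dynamic SQL.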
    
    qid & accept id: (31831068, 31832281) query: SQL Upvote Downvote system soup:

    soup wrap:

    I've created your database structure like this:

    CREATE TABLE `posts` (
      `post_id` int(11) unsigned NOT NULL AUTO_INCREMENT,
      `post_title` varchar(50) DEFAULT NULL,
      `post_score` int(11) DEFAULT NULL,
      PRIMARY KEY (`post_id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=latin1;
    
    INSERT INTO `posts` VALUES (NULL, 'test', 0), (NULL, 'test2', 0);
    
    CREATE TABLE `pvotes` (
      `pvote_id` int(11) unsigned NOT NULL AUTO_INCREMENT,
      `fk_post_id` int(11) DEFAULT NULL,
      `fk_user_id` int(11) DEFAULT NULL,
      `pvote_score` int(11) DEFAULT NULL,
      PRIMARY KEY (`pvote_id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=latin1;
    
    INSERT INTO `pvotes` VALUES (NULL, 1, 0, 2), (NULL, 1, 0, 3), (NULL, 1, 0, -1), (NULL, 2, 0, 2);
    

    This is the query that should do the trick:

    UPDATE posts SET post_score = (SELECT SUM(pvote_score) FROM pvotes WHERE fk_post_id = post_id);
    

    The result I've got is this:

    post_id | post_title | post_score
    1       | test       | 4
    2       | test2      | 2
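The whole setup fits in a short SQLite sketch (fk_user_id omitted for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE posts (post_id INTEGER PRIMARY KEY, post_title TEXT, post_score INT);
CREATE TABLE pvotes (fk_post_id INT, pvote_score INT);
INSERT INTO posts (post_title, post_score) VALUES ('test',0),('test2',0);
INSERT INTO pvotes VALUES (1,2),(1,3),(1,-1),(2,2);

-- recompute each post's score as the sum of its votes
UPDATE posts
SET post_score = (SELECT SUM(pvote_score) FROM pvotes
                  WHERE fk_post_id = post_id);
""")
rows = conn.execute(
    "SELECT post_id, post_title, post_score FROM posts ORDER BY post_id").fetchall()
print(rows)
```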

    qid & accept id: (31840220, 31840237) query: Migrating from Oracle to SQL server. Dual table select query -> SQL server soup:

    soup wrap:

    You can use convert() and no from clause:

    SELECT REPLACE(CONVERT(VARCHAR(10), getdate(), 121), '-', '')
    

    or use 112:

    SELECT CONVERT(VARCHAR(8), getdate(), 112)
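For comparison only (not part of the SQL Server answer above): in SQLite the same yyyymmdd formatting is a single strftime call.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# strftime('%Y%m%d', ...) plays the role of CONVERT(VARCHAR(8), getdate(), 112)
yyyymmdd = conn.execute("SELECT strftime('%Y%m%d', '2015-08-03')").fetchone()[0]
print(yyyymmdd)
```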
    
    qid & accept id: (31855759, 31855869) query: SQL Query/function to remove alphabets only from end soup:

    soup wrap:

    Assuming you only have alphanumeric characters, you can use PATINDEX with STUFF and REVERSE like this.

    Query

    SELECT
    ISNULL(REVERSE(STUFF(REVERSE(col),1,PATINDEX('%[0-9]%',REVERSE(col)) -1,'')),'') as col
    FROM
    (
        VALUES('ABCD123F'),('PORT123G67KK'),('123465'),('ABCDG')
    ) as tab(col)
    

    OUTPUT

    col
    ABCD123
    PORT123G67
    123465
    ''
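A Python regex makes a handy cross-check of what the PATINDEX/STUFF/REVERSE expression computes: strip a trailing run of letters, with all-letter input collapsing to ''.

```python
import re

def strip_trailing_alpha(s):
    # Remove a trailing run of letters, mirroring the PATINDEX/STUFF/REVERSE trick
    return re.sub(r'[A-Za-z]+$', '', s)

print([strip_trailing_alpha(s) for s in ('ABCD123F', 'PORT123G67KK', '123465', 'ABCDG')])
```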
    
    qid & accept id: (31899032, 31899139) query: Uniqe Replace Query soup:

    soup wrap:

    Exactly how to do it depends on the DBMS you are using, but I think something like this should do the trick. Inspired by answers to this question.

    UPDATE table2
    SET table2.firstname = table1.firstname
    FROM table1, table2
    WHERE
        table1.lastname = table2.lastname AND 
        table1.firstname LIKE CONCAT(table2.firstname, '%')
    

    The WHERE condition finds a match in table1 that has the same lastname as in table2, and whose firstname begins with the same string. CONCAT is string concatenation, so you would get something looking like 'Bobby' LIKE 'Bob%'.

    Please note that if there are several matches for one row in table2 (for instance, both Anna Smith and Anastasia Smith matching An... Smith), that row will be updated with both. The last one will be the one that sticks, but which one happens to be last is pretty much random. To check whether you have any cases like that, I think you could run this query:

    SELECT table2.firstname, table2.lastname
    FROM table1, table2
    WHERE
        table1.lastname = table2.lastname AND 
        table1.firstname LIKE CONCAT(table2.firstname, '%')
    GROUP BY table2.firstname, table2.lastname
    HAVING COUNT(*) > 1
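In SQLite, which allows a correlated subquery instead of UPDATE ... FROM and uses || instead of CONCAT, the same prefix-match update looks like this sketch with invented names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (firstname TEXT, lastname TEXT);
CREATE TABLE table2 (firstname TEXT, lastname TEXT);
INSERT INTO table1 VALUES ('Bobby','Jones'),('Anna','Smith');
INSERT INTO table2 VALUES ('Bob','Jones'),('An','Smith');

-- expand each abbreviated firstname to the full name with the same lastname
UPDATE table2
SET firstname = (SELECT t1.firstname FROM table1 t1
                 WHERE t1.lastname = table2.lastname
                   AND t1.firstname LIKE table2.firstname || '%');
""")
rows = conn.execute("SELECT firstname, lastname FROM table2 ORDER BY lastname").fetchall()
print(rows)
```

The same multiple-match caveat applies: the subquery silently picks one row when several full names share the prefix.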
    

    Disclaimer: I have not tested any of this.

    qid & accept id: (31944662, 31945555) query: Building a daily view from a table with an "effective date" soup:

    soup wrap:

    You need to be very careful with this type of view. It will be easy to write a view that is good at giving all the individual dates that each record is valid for, but slow when asking which record is valid on one specific date.

    (Because answering the second question involves answering the first question for each and every date, then discarding the failures.)

    The following is reasonable at taking a date and returning the rows valid on that date.

    CREATE VIEW DAILY_VALUE_DATA AS (
        SELECT
            DATE_TABLE.date,
            VALUE_DATA.value
        FROM
            DATE_TABLE
        LEFT JOIN
            VALUE_DATA
                ON  VALUE_DATA.start_date = (SELECT MAX(lookup.start_date)
                                               FROM VALUE_DATA lookup
                                              WHERE lookup.start_date <= DATE_TABLE.date
                                            )
    );
    
    SELECT * FROM DAILY_VALUE_DATA WHERE date = '2015-08-11'
    

    Note: This assumes DATE_TABLE is a real, persistent, materialised table, not the in-line view you used, which would greatly compromise performance.

    It also assumes that VALUE_DATA is indexed by the start_date.
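The lookup pattern in that view (pick the row with the greatest start_date that is still on or before the target date) can be sanity-checked in isolation; a minimal sqlite3 sketch, with an invented table and sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE value_data (start_date TEXT, value TEXT);
INSERT INTO value_data VALUES
  ('2015-01-01', 'old'),
  ('2015-08-01', 'current'),
  ('2015-09-01', 'future');
""")

# For a given date, fetch the row whose start_date is the greatest
# start_date that is still <= that date.
row = conn.execute("""
    SELECT value FROM value_data
    WHERE start_date = (SELECT MAX(start_date)
                          FROM value_data
                         WHERE start_date <= ?)
""", ("2015-08-11",)).fetchone()
print(row[0])  # -> current
```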


    EDIT:

    I also find it likely that your value table will have other columns. Let's say that it is a value per person. Maybe their address on any given date.

    To extend the query above you then also need to join on the person table...

    CREATE VIEW DAILY_VALUE_DATA AS (
        SELECT
            PERSON.id   AS person_id,
            DATE_TABLE.date,
            VALUE_DATA.value
        FROM
            PERSON
        INNER JOIN
            DATE_TABLE
                ON  DATE_TABLE.date >=          PERSON.date_of_birth
                AND DATE_TABLE.date <  COALESCE(PERSON.date_of_death, CURDATE() + 1)
        LEFT JOIN
            VALUE_DATA
                ON  VALUE_DATA.start_date = (SELECT MAX(lookup.start_date)
                                               FROM VALUE_DATA lookup
                                              WHERE lookup.start_date <= DATE_TABLE.date
                                                AND lookup.person_id   = PERSON.id
                                            )
    );
    
    SELECT * FROM DAILY_VALUE_DATA WHERE person_id = 1 AND date = '2015-08-11'
    


    EDIT:

    Another alternative to the LEFT JOIN is to embed the correlated sub-query in the SELECT block. This is effective when you only have one value to pull from the target table, but less effective if you need to pull many values from the target table...

    CREATE VIEW DAILY_VALUE_DATA AS (
        SELECT
            PERSON.id   AS person_id,
            DATE_TABLE.date,
            (SELECT VALUE_DATA.value
               FROM VALUE_DATA
              WHERE VALUE_DATA.start_date <= DATE_TABLE.date
                AND VALUE_DATA.person_id   = PERSON.id
           ORDER BY VALUE_DATA.start_date DESC
              LIMIT 1
            )   AS value
        FROM
            PERSON
        INNER JOIN
            DATE_TABLE
                ON  DATE_TABLE.date >=          PERSON.date_of_birth
                AND DATE_TABLE.date <  COALESCE(PERSON.date_of_death, CURDATE() + 1)
    );
    
    SELECT * FROM DAILY_VALUE_DATA WHERE person_id = 1 AND date = '2015-08-11'
    
    qid & accept id: (31965908, 31966475) query: Using SQL to query numbers that have more than tenths decimal place soup:

    soup wrap:

    You can do this with charindex and reverse.

    select val 
    from tbl
    where charindex('.', reverse(val)) = 3
    

    Demo

    declare @tbl table(val varchar(50))
    insert into @tbl select '11.02'
    insert into @tbl select '411.0'
    insert into @tbl select '11.44'
    insert into @tbl select '1144.03'
    insert into @tbl select '1441.5'
    
    select val 
    from @tbl
    where charindex('.', reverse(val)) = 3
    
    output:
    11.02
    11.44
    1144.03
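The reverse/charindex trick ports directly to other languages: in the reversed string, the decimal point sits at 1-based position 3 exactly when two digits follow it. A quick Python equivalent of the same check:

```python
vals = ['11.02', '411.0', '11.44', '1144.03', '1441.5']

# charindex('.', reverse(val)) = 3  <=>  in the reversed string the '.'
# is at 1-based position 3, i.e. exactly two characters follow the point.
two_decimals = [v for v in vals if v[::-1].find('.') + 1 == 3]
print(two_decimals)  # -> ['11.02', '11.44', '1144.03']
```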
    
    qid & accept id: (31977969, 31978040) query: Oracle SQL - Produce multiple different aggregations using analytic functions? soup:

    soup wrap:

    I would do this with grouping sets:

    select prop1, prop2, sum(val)
    from test_data
    group by grouping sets ((prop1), (prop2))
    

    Here is your example.

    Getting your exact output requires a bit more work.

    select (case when prop1 is null then 'prop2' else 'prop1' end) as prop_name,
           coalesce(prop1, prop2) as prop,
           sum(value)
    from test_data
    group by grouping sets ((prop1), (prop2));
    

    This assumes that the first two columns do not contain NULL values. The better way to express the logic is using GROUPING_ID or GROUP_ID(), but I think the logic is easier to follow with COALESCE().
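GROUPING SETS ((prop1), (prop2)) behaves like a UNION ALL of two separate GROUP BY queries, one per set. A small Python sketch of that equivalence, with invented sample rows:

```python
from collections import defaultdict

# (prop1, prop2, val) rows, invented for the demo
rows = [('a', 'x', 1), ('a', 'y', 2), ('b', 'x', 4)]

def group_sum(rows, key_idx):
    # GROUP BY one column, SUM the value column
    sums = defaultdict(int)
    for r in rows:
        sums[r[key_idx]] += r[2]
    return dict(sums)

# grouping sets ((prop1), (prop2)) = totals by prop1 plus totals by prop2
by_prop1 = group_sum(rows, 0)
by_prop2 = group_sum(rows, 1)
print(by_prop1)  # -> {'a': 3, 'b': 4}
print(by_prop2)  # -> {'x': 5, 'y': 2}
```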

    qid & accept id: (31978339, 31978645) query: SQL: Return the values from an array where some of them don't exist in the table soup:

    soup wrap:

    It sounds like you’re looking for an anti-join. If you can insert $products into a temp table, you can use not exists to get product ids that are not in the table:

    select * from temp_product_ids t
    where not exists (
        select 1 from products p
        where p.id = t.product_id
    )
    

    Another approach is to get all the product ids that do exist

    select id from products where id in ( $my_products )
    

    and to use array_diff to see which values are missing

    $missing_products = array_diff($my_products,$database_products);
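The anti-join half can be sketched end to end with sqlite3; the table and column names below are placeholders, not the asker's schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (id INTEGER);
INSERT INTO products VALUES (1), (2), (4);
CREATE TEMP TABLE temp_product_ids (product_id INTEGER);
INSERT INTO temp_product_ids VALUES (1), (2), (3), (5);
""")

# Anti-join: ids from the temp table with no matching products row
missing = [r[0] for r in conn.execute("""
    SELECT product_id FROM temp_product_ids t
    WHERE NOT EXISTS (SELECT 1 FROM products p
                      WHERE p.id = t.product_id)
    ORDER BY product_id
""")]
print(missing)  # -> [3, 5]
```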
    
    qid & accept id: (32049478, 32049521) query: How can I get the last 12 months from the current date PLUS extra days till 1st of the last month retrieved soup:

    soup wrap:

    Using DATEADD and DATEDIFF:

    DECLARE @ThisDate DATE = '20150817'
    SELECT DATEADD(YEAR, -1, DATEADD(MONTH, DATEDIFF(MONTH, '19000101', @ThisDate), '19000101'))
    

    For more common date routines, see this article by Lynn Pettis.


    To use in your WHERE clause:

    DECLARE @ThisDate DATE = '20150817'
    SELECT *
    FROM 
    WHERE
         >= DATEADD(YEAR, -1, DATEADD(MONTH, DATEDIFF(MONTH, '19000101', @ThisDate), '19000101'))
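What the nested DATEADD/DATEDIFF expression computes is "the first day of @ThisDate's month, one year earlier". The same arithmetic sketched in Python, to make the intent concrete (not SQL Server, just an illustration):

```python
from datetime import date

def start_of_month_last_year(d):
    # Truncate to the 1st of the month, then step back one year
    return d.replace(day=1, year=d.year - 1)

print(start_of_month_last_year(date(2015, 8, 17)))  # -> 2014-08-01
```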
    
    qid & accept id: (32053653, 32054534) query: Implementing range condition in a SQL query soup:

    soup wrap:

    First, an answer to your actual question: this is going to connect the two tables together, but the WHERE clause at the end will filter out the extra rows you are seeing. No idea what your actual table names are, so please replace them with whatever is necessary.

    SELECT *
    FROM 
        Perc p
         LEFT JOIN 
        Thresh t ON 
            p.Percentage <= t.Percentage    -- return all ranges that are greater than the current value
    WHERE NOT EXISTS
      (
        SELECT 1 
        FROM Thresh x 
        WHERE   -- eliminate ranges that are higher than this range and greater than the current value
            t.Percentage > x.Percentage AND         
            p.Percentage <= x.Percentage            
      )
    

    Performance on that is likely not so great, since that extra WHERE NOT EXISTS will slow you down. A far, far better model would be (written on SQL Server, since I don't have a db2 instance available to me):

    CREATE TABLE #Threshold
      (
        Threshold_ID INT IDENTITY(1,1),
        PercentageStart INT,
        PercentageEnd INT,
        Description VARCHAR(20)
      )
    
    INSERT INTO #Threshold 
      (
        PercentageStart,
        PercentageEnd,
        Description
      )
    
    SELECT 0, 20, 'Low'
    UNION 
    SELECT 21, 40, 'Medium'
    UNION
    SELECT 41, 60, 'On Track'
    UNION
    SELECT 61, 100, 'High'
    
    
    CREATE TABLE #Percentage (Percentage INT)
    
    INSERT INTO #Percentage 
    SELECT 40
    UNION
    SELECT 50
    
    
    SELECT *
    FROM 
        #Percentage p 
         INNER JOIN 
        #Threshold t ON 
            p.Percentage BETWEEN t.PercentageStart AND t.PercentageEnd
    
    
    DROP TABLE #Percentage
    DROP TABLE #Threshold
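The start/end range model joins with a simple BETWEEN on any engine; a hedged sqlite3 sketch of the same idea (plain table names, since sqlite has no `#temp` tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE threshold (p_start INT, p_end INT, descr TEXT);
INSERT INTO threshold VALUES
  (0, 20, 'Low'), (21, 40, 'Medium'), (41, 60, 'On Track'), (61, 100, 'High');
CREATE TABLE percentage (pct INT);
INSERT INTO percentage VALUES (40), (50);
""")

# One row per percentage: join each value to the range it falls into
rows = conn.execute("""
    SELECT p.pct, t.descr
    FROM percentage p
    JOIN threshold t ON p.pct BETWEEN t.p_start AND t.p_end
    ORDER BY p.pct
""").fetchall()
print(rows)  # -> [(40, 'Medium'), (50, 'On Track')]
```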
    
    qid & accept id: (32095863, 32097206) query: SQL query filter, count if soup:

    soup wrap:

    I am not sure what your real goal is, because if you need to get the cases where a user has an app1 and at least one "not an app"

    Your expected result

    user
    name1
    name4 
    name5 
    

    is wrong.

    Check my fiddle: http://sqlfiddle.com/#!9/cbb566/7

    SELECT `user`
    FROM table1 t1
    GROUP BY `user`
    HAVING SUM(IF(`app`='app1',1,0))>0
     AND SUM(IF(`app`='not an app',1,0))>0
    

    UPDATE If you need any that starts with 'not an app'

    You can http://sqlfiddle.com/#!9/cbb566/11 :

    SELECT `user`
    FROM table1 t1
    GROUP BY `user`
    HAVING SUM(IF(`app`='app1',1,0))>0
     AND SUM(IF(`app` LIKE 'not an app%',1,0))>0
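The same conditional-count filter can be written portably with CASE instead of MySQL's IF(); a sqlite3 sketch with invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 ("user" TEXT, app TEXT);
INSERT INTO table1 VALUES
  ('name1', 'app1'), ('name1', 'not an app'),
  ('name2', 'app1'),
  ('name3', 'not an app');
""")

# Keep users with at least one 'app1' row AND at least one row
# starting with 'not an app' (CASE is the portable form of IF)
users = [r[0] for r in conn.execute("""
    SELECT "user" FROM table1
    GROUP BY "user"
    HAVING SUM(CASE WHEN app = 'app1' THEN 1 ELSE 0 END) > 0
       AND SUM(CASE WHEN app LIKE 'not an app%' THEN 1 ELSE 0 END) > 0
""")]
print(users)  # -> ['name1']
```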
    
    qid & accept id: (32096616, 32096672) query: SQL Check duplicated values in different fields soup:
    soup wrap:
    SELECT
      f,
      COUNT(*)   overall_occurrences,
      COUNT(CASE WHEN field = 1 THEN f END)   AS f1_occurrences,
      COUNT(CASE WHEN field = 2 THEN f END)   AS f2_occurrences,
      COUNT(CASE WHEN field = 3 THEN f END)   AS f3_occurrences,
      COUNT(CASE WHEN field = 4 THEN f END)   AS f4_occurrences
    FROM
    (
    
        SELECT 1 AS field, f1 AS f, datalist.* FROM datalist
        UNION ALL
        SELECT 2 AS field, f2 AS f, datalist.* FROM datalist
        UNION ALL
        SELECT 3 AS field, f3 AS f, datalist.* FROM datalist
        UNION ALL
        SELECT 4 AS field, f4 AS f, datalist.* FROM datalist
    )
       pivotted
    WHERE
       somefield = 0
    GROUP BY
       f
    HAVING
       COUNT(*) > 1
    

    EDIT: Updated to additionally show where the duplicates occur.

    EDIT; alternative for slightly less repeating of logic...

    (Possibly not necessary here, but example of method that can be useful in more complex scenarios.)

    SELECT
      f,
      COUNT(*)   overall_occurrences,
      COUNT(CASE WHEN field = 1 THEN f END)   AS f1_occurrences,
      COUNT(CASE WHEN field = 2 THEN f END)   AS f2_occurrences,
      COUNT(CASE WHEN field = 3 THEN f END)   AS f3_occurrences,
      COUNT(CASE WHEN field = 4 THEN f END)   AS f4_occurrences
    FROM
    (
    
        SELECT
            pivotter.field,
            CASE pivotter.field
                WHEN 1 THEN datalist.f1
                WHEN 2 THEN datalist.f2
                WHEN 3 THEN datalist.f3
                WHEN 4 THEN datalist.f4
            END   AS f,
            datalist.*
        FROM
            datalist
        CROSS JOIN
        (
            SELECT 1 AS field
            UNION ALL
            SELECT 2 AS field
            UNION ALL
            SELECT 3 AS field
            UNION ALL
            SELECT 4 AS field
        )
            AS pivotter
    )
       pivotted
    WHERE
       somefield = 0
    GROUP BY
       f
    HAVING
       COUNT(*) > 1
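The unpivot-then-count idea runs unchanged on sqlite; a compact demo with a tiny invented `datalist` (the `somefield` filter is dropped for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE datalist (f1 TEXT, f2 TEXT, f3 TEXT, f4 TEXT);
INSERT INTO datalist VALUES ('a', 'b', 'a', 'c'), ('d', 'e', 'f', 'b');
""")

# Stack the four columns into (field, f) pairs, then count duplicates of f
dupes = conn.execute("""
    SELECT f, COUNT(*) FROM (
        SELECT 1 AS field, f1 AS f FROM datalist
        UNION ALL SELECT 2, f2 FROM datalist
        UNION ALL SELECT 3, f3 FROM datalist
        UNION ALL SELECT 4, f4 FROM datalist
    )
    GROUP BY f HAVING COUNT(*) > 1
    ORDER BY f
""").fetchall()
print(dupes)  # -> [('a', 2), ('b', 2)]
```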
    
    qid & accept id: (32112171, 32112499) query: SQL UNION or similar to supply missing absent rows? (Take two) soup:

    soup wrap:

    You can do it like this:

    select distinct b.Key_B, a.Key_A, a.A_1, b1.B_1
    from T_A a cross join (select key_b from T_B) b
    left join T_B b1 on b1.Key_B = b.Key_B and b1.Key_A = a.Key_A;
    

    edit: since Access2k doesn't support cross join, the query can be rewritten as:

    select distinct c.Key_B, c.Key_A, c.A_1, B1.B_1 
    from (
        select a.Key_A, a.A_1, b.Key_B 
        from T_A a, (select key_b from T_B) b
    ) c 
    left join T_B b1 on B1.Key_B = c.Key_B and B1.Key_A = c.Key_A ;
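The rewrite works because the old comma-separated FROM list is just the implicit spelling of CROSS JOIN; a quick sqlite3 check of that equivalence, with invented tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (x INT);
INSERT INTO a VALUES (1), (2);
CREATE TABLE b (y INT);
INSERT INTO b VALUES (10), (20);
""")

# The comma join produces the same cartesian product as CROSS JOIN
explicit = conn.execute("SELECT * FROM a CROSS JOIN b ORDER BY x, y").fetchall()
comma = conn.execute("SELECT * FROM a, b ORDER BY x, y").fetchall()
print(explicit == comma)  # -> True
print(explicit)  # -> [(1, 10), (1, 20), (2, 10), (2, 20)]
```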
    
    qid & accept id: (32128843, 32128981) query: Oracle select query to filter rows soup:

    soup wrap:

    This will give you rows 1 and 3

    Select * from (
       Select * , Row_number() Over(Partition by a_num, a_code order by id) r_num from Your_Table ) result
    Where r_num = 1
    

    Just use DESC in order by and you will get rows 2 and 4

    Select * from (
       Select * , Row_number() Over(Partition by a_num, a_code order by id desc) r_num from Your_Table ) result
    Where r_num = 1
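The same first-row-per-group pattern runs on any engine with window functions; a sqlite3 sketch (SQLite 3.25+ for ROW_NUMBER), with invented data:

```python
import sqlite3  # window functions need SQLite 3.25+

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE your_table (id INT, a_num INT, a_code TEXT);
INSERT INTO your_table VALUES
  (1, 10, 'x'), (2, 10, 'x'), (3, 20, 'y'), (4, 20, 'y');
""")

# r_num = 1 keeps the first row of each (a_num, a_code) group;
# ORDER BY id DESC inside OVER() would keep the last instead
first_ids = [r[0] for r in conn.execute("""
    SELECT id FROM (
        SELECT id, ROW_NUMBER() OVER (
                 PARTITION BY a_num, a_code ORDER BY id) AS r_num
        FROM your_table
    ) WHERE r_num = 1
    ORDER BY id
""")]
print(first_ids)  # -> [1, 3]
```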
    
    qid & accept id: (32150683, 32151508) query: SQL distribute values across rows soup:

    soup wrap:

    EDIT: Modified to use the FLOOR() function and changed the inequality per your comment.

    I would try something like this:

    FLOOR(B.MaxVal / COUNT(B.bId) OVER (PARTITION BY B.bId)) 
    + CASE 
        WHEN ROW_NUMBER() OVER (PARTITION BY b.bId ORDER BY b.bId) <= (B.MaxVal % COUNT(B.bId) OVER (PARTITION BY B.bId)) THEN 1 
        ELSE 0 
    END as "DISTRIBUTED_AVG"
    

    The first bit is the division you were already doing. ROUND() in SQL Server can also serve as a truncate function (with its third argument set, as in ROUND(..., 0, 1), it truncates at the decimal point), but the FLOOR() function is the cleaner choice here.

    The next bit is complicated. Your description is basically that we need to spread out the remainder of the division of the maximum by the count. Well, the remainder of division is the modulo function. So, we know we probably need to use that. That's what (B.MaxVal % COUNT(B.bId) OVER (PARTITION BY B.bId)) is.

    Next, we need some way to tell how much of the remainder we've used up. Because we're only dealing with the remainder, we know that we never need to give more than one extra item to any value. That also means we'll "consume" the remainder at a rate of 1 per row. So, we need to know which row in the group we're on. To do that, I used the ROW_NUMBER() function. It's partitioned the same as the COUNT() so it will have the same grouping. The only thing you may want to change is the ORDER BY; I just picked something. Basically, as long as the row number is less than or equal to the remainder, there is still remainder left to hand out.

    I feel like my math is slightly off or I'm missing something, however, because I'm currently kind of tired. I encourage you to look at each of these individually to understand what it's doing:

    SELECT DISTINCT A.aId,
        B.bId,
        B.MaxVal,
        B.MaxVal / Count(B.bId) OVER (PARTITION BY B.bId) AS 'AVG',
        FLOOR(B.MaxVal / COUNT(B.bId) OVER (PARTITION BY B.bId)), 
        ROW_NUMBER() OVER (PARTITION BY b.bId ORDER BY b.bId), 
        B.MaxVal % COUNT(B.bId) OVER (PARTITION BY B.bId),
        ROUND(B.MaxVal / COUNT(B.bId) OVER (PARTITION BY B.bId),0,1) 
        + CASE 
            WHEN ROW_NUMBER() OVER (PARTITION BY b.bId ORDER BY b.bId) <= (B.MaxVal % COUNT(B.bId) OVER (PARTITION BY B.bId)) THEN 1 
            ELSE 0 
        END as "DISTRIBUTED_AVG"
    FROM [...]
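The arithmetic being built up here (floor of the division, plus one extra unit for the first "remainder" rows) is easy to check in isolation:

```python
def distribute(total, n):
    # Each of n rows gets floor(total / n);
    # the first (total % n) rows get one extra unit
    base, rem = divmod(total, n)
    return [base + 1 if i < rem else base for i in range(n)]

print(distribute(10, 3))  # -> [4, 3, 3]
print(sum(distribute(10, 3)) == 10)  # the pieces always re-sum to the total
```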
    
    qid & accept id: (32182160, 32182225) query: SQL count regex matches (PostgreSQL) soup:

    soup wrap:

    If you just want to count the first "word" in the message, then use substring_index():

    select substring_index(message, ' ', 1) as messageType, count(*)
    from table t
    group by substring_index(message, ' ', 1)
    order by count(*) desc;
    

    EDIT:

    You can do this in Postgres by looking for the first space:

    select left(message, position(' ' in message)) as messageType, count(*)
    from table t
    group by messageType
    order by count(*) desc;
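Either SQL form is just "count messages by their first word"; a Python sketch of the same tally, with invented sample messages:

```python
from collections import Counter

messages = ["ERROR disk full", "WARN low memory", "ERROR timeout"]

# messageType = everything before the first space
counts = Counter(m.split(' ', 1)[0] for m in messages)
print(counts.most_common())  # -> [('ERROR', 2), ('WARN', 1)]
```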
    
    qid & accept id: (32241284, 32241629) query: Filter Columns which have id in splitted String in sqlserver 2008 soup:

    soup wrap:

    You can do something like this:


    SAMPLE DATA

    CREATE TABLE #Test
    (
        Id NVARCHAR(100)
    )
    INSERT INTO #Test VALUES ('1;2;12;15;6;77')
    
    CREATE TABLE #Test2
    (
        Id NVARCHAR(100),
        Setcolumn NVARCHAR(100)
    )
    INSERT INTO #Test2 VALUES
    (1, 'false'), (2, 'false'), (3, 'false'), (4, 'false')
    

    QUERY

    ;WITH cte AS(
     SELECT   
         Split.a.value('.', 'VARCHAR(100)') AS Data  
     FROM  
     (
         SELECT  
             CAST ('<M>' + REPLACE(Id, ';', '</M><M>') + '</M>' AS XML) AS Data  
         FROM  #Test
     ) AS A CROSS APPLY Data.nodes ('/M') AS Split(a) 
    )
    UPDATE t
    SET t.Setcolumn = 'true'
    FROM cte 
    JOIN #Test2 t ON cte.Data = t.Id
    
    SELECT * 
    FROM #Test2
    

    DEMO

    You can test it at SQL FIDDLE
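The XML cast is just T-SQL's way of splitting '1;2;12;...' into rows. Outside the database, the same update is a split plus a keyed UPDATE; a sqlite3 sketch (not the T-SQL above, and with plain table names):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test2 (id INTEGER, setcolumn TEXT);
INSERT INTO test2 VALUES (1, 'false'), (2, 'false'), (3, 'false'), (4, 'false');
""")

id_list = "1;2;12;15;6;77"
ids = [int(x) for x in id_list.split(';')]  # split() stands in for the XML trick

# Flag every row whose id appears in the split list
conn.executemany("UPDATE test2 SET setcolumn = 'true' WHERE id = ?",
                 [(i,) for i in ids])
rows = conn.execute("SELECT id, setcolumn FROM test2 ORDER BY id").fetchall()
print(rows)  # -> [(1, 'true'), (2, 'true'), (3, 'false'), (4, 'false')]
```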

    qid & accept id: (32277369, 32277449) query: How to add 10 seconds in current_timestamp SQL ( Oracle ) soup:

    soup wrap:

    In Oracle, if you want a timestamp as the result, rather than a date (a date always includes the time to the second, though, so you may just want a date), you'd want to add an interval to the timestamp. There are various ways to construct an interval-- you can use an interval literal

    select current_timestamp + interval '10' second
      from dual
    

    or you could use the numtodsinterval function

    select current_timestamp + numToDSInterval( 10, 'second' )
      from dual
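For comparison, Oracle's interval literal corresponds to plain timedelta arithmetic in Python; the timestamp below is invented for the demo:

```python
from datetime import datetime, timedelta

now = datetime(2015, 8, 30, 12, 0, 0)
later = now + timedelta(seconds=10)  # interval '10' second
print(later)  # -> 2015-08-30 12:00:10
```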
    
    qid & accept id: (32288840, 32289149) query: Filter results in SQL query for search soup:

    soup wrap:

    I think you want an aggregation and a join:

    SELECT T1.Name
    FROM  T1 JOIN
         #TempSearch ts
         ON T1.Name LIKE CONCAT('%', Ts.Value, '%')
    GROUP BY t1.Name
    HAVING COUNT(*) = (SELECT COUNT(*) FROM #TempSearch);
    

    This counts the number of matches and makes sure that all components match. You can add more columns to the SELECT and GROUP BY to return more columns.

    Note:

    The following simpler version would work for your example:

    select t1.*
    from t1
    where t1.name like '%' + replace(@ModelName, ' ', '%') + '%';
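The match-every-term trick (count of matched terms must equal the total number of terms) runs as-is on sqlite; a sketch with invented names and search terms:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (name TEXT);
INSERT INTO t1 VALUES ('red widget large'), ('red gadget'), ('blue widget');
CREATE TEMP TABLE search (value TEXT);
INSERT INTO search VALUES ('red'), ('widget');
""")

# A name qualifies only if it LIKE-matches every search term:
# its count of matching terms must equal the total term count
names = [r[0] for r in conn.execute("""
    SELECT t1.name
    FROM t1 JOIN search s ON t1.name LIKE '%' || s.value || '%'
    GROUP BY t1.name
    HAVING COUNT(*) = (SELECT COUNT(*) FROM search)
""")]
print(names)  # -> ['red widget large']
```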
    
    qid & accept id: (32304479, 32304506) query: Query to total monthly hours for events spanning month borders soup:

    soup wrap:

    The easy way is using a months table because you can have empty months.

    create table months (
       month_id integer,
       d_begin datetime,
       d_end datetime
    ) 
    

    Then you do a join with your table.

    SQL Fiddle Demo

    WITH ranges as (
        SELECT *
        FROM 
            months m 
            LEFT JOIN events e 
                on   e.StartDate <= m.d_end
                and  e.EndDate >= m.d_begin
     ) 
    SELECT r.*, 
           DATEDIFF(hour, 
                    CASE 
                        WHEN StartDate > d_begin THEN StartDate
                        WHEN StartDate IS NULL THEN NULL
                        ELSE d_begin
                    END, 
                    CASE 
                        WHEN EndDate < d_end THEN EndDate
                        WHEN EndDate IS NULL THEN NULL
                        ELSE DATEADD(day,1,d_end)
                    END) as Hours
    FROM ranges r
    

    You have 4 cases

    • an event where begin and end are inside the month
    • an event where the end is beyond the month end
    • an event that starts before and ends after the month
    • a month with no events.
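The two CASE expressions simply clamp each event to the month window before DATEDIFF measures it. The same clipping logic, sketched in Python with invented dates (an event spanning the August/September border, counting only the August part):

```python
from datetime import datetime

def overlap_hours(ev_start, ev_end, m_begin, m_end):
    # Clip the event to the month window, then measure what's left
    start = max(ev_start, m_begin)
    end = min(ev_end, m_end)
    return max((end - start).total_seconds() / 3600, 0)

print(overlap_hours(datetime(2015, 8, 31, 20), datetime(2015, 9, 1, 4),
                    datetime(2015, 8, 1), datetime(2015, 9, 1)))  # -> 4.0
```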
    qid & accept id: (32317079, 32317723) query: Search an array of records in a WHERE clause soup:

soup wrap:

    Just unnest the array and join it to your table. I'm going to make some assumptions about your schema... This is the record you were referring to, from which you can create an array r[]:

    CREATE TYPE r AS (
      id INT,
      text_value TEXT
    );
    

    This is the table that contains values which you want to search for in your array of records:

    CREATE TABLE t(v) AS
    VALUES ('a'), ('b'), ('c'), ('d');
    

    Now, simply join the two:

    SELECT *
    FROM t
    JOIN unnest(array[row(1, 'a')::r, row(2, 'b')::r]) u
    ON t.v = u.text_value
    

    This will yield

    v  id  text_value
    -----------------
    a  1   a
    b  2   b
    
    qid & accept id: (32332145, 32373364) query: How to set and return a variable in a Sybase stored procedure soup:
soup wrap:
    CREATE PROCEDURE RETURN_SELECT
    AS BEGIN
    
        DECLARE @MY_VARIABLE int 
        SELECT @MY_VARIABLE = 2
        SELECT @MY_VARIABLE
    END
    
    
    EXEC RETURN_SELECT
    

    The output would look like this:

    @MY_VARIABLE
    2
    

    As simple as it gets. I don't know if that helps, or whether you wanted something more.
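    If the value needs to go back to the caller rather than into a result set, an OUTPUT parameter is the usual alternative (a sketch; the procedure name is made up, and the syntax shown is shared by Sybase ASE and SQL Server):

```sql
-- Hypothetical procedure that hands the value back via an OUTPUT parameter
CREATE PROCEDURE RETURN_OUT
    @MY_VARIABLE int OUTPUT
AS BEGIN
    SELECT @MY_VARIABLE = 2
END
GO

-- Caller side:
DECLARE @RESULT int
EXEC RETURN_OUT @MY_VARIABLE = @RESULT OUTPUT
SELECT @RESULT   -- 2
```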

    qid & accept id: (32367084, 32367319) query: How to fetch details and how to connect category and book table structure ( how to use GROUP_CONCAT) soup:

soup wrap:

    You need to create an additional table:

    CREATE TABLE book_categories (
        book_id INT,
        category_id INT,
        PRIMARY KEY (book_id, category_id),
        FOREIGN KEY (book_id) REFERENCES book (id),
        FOREIGN KEY (category_id) REFERENCES category (id)
    )
    

    Then you can use a JOIN to get your result:

    SELECT book_name, GROUP_CONCAT(category_name)
    FROM book AS b
    JOIN book_categories AS bc ON bc.book_id = b.id
    JOIN category AS c ON c.id = bc.category_id
    GROUP BY b.id
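    By default GROUP_CONCAT joins the values with commas; MySQL also lets you order the list and pick a separator (a sketch reusing the tables above):

```sql
SELECT b.book_name,
       GROUP_CONCAT(c.category_name ORDER BY c.category_name SEPARATOR ', ') AS categories
FROM book AS b
JOIN book_categories AS bc ON bc.book_id = b.id
JOIN category AS c ON c.id = bc.category_id
GROUP BY b.id, b.book_name
```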
    
    qid & accept id: (32372280, 32373823) query: How to return an array of values in output parameter in Stored Proc soup:

soup wrap:

    Instead of using an OUTPUT parameter, you could return the rows from inside your stored procedure using the OUTPUT clause (optionally capturing them with INTO). Here is an example:

    Let's create our test data:

    CREATE TABLE MyTable(
        ID  INT IDENTITY(1, 1),
        IDType  INT,
        B   VARCHAR(6)
    )
    INSERT INTO MyTable(IDType, B) VALUES
    (2, 'Rev'), (2, 'Rev'),
    (2, 'Rev'), (1, 'Rev'),
    (1, 'Rev'), (1, 'Rev'),
    (1, 'NotRev'), (1, 'NotRev');
    

    MyTable:

    ID          IDType      B
    ----------- ----------- ------
    1           2           Rev
    2           2           Rev
    3           2           Rev
    4           1           Rev
    5           1           Rev
    6           1           Rev
    7           1           NotRev
    8           1           NotRev
    

    What we want is to update rows WHERE IDType = 1 AND B = 'Rev'. In this case, the rows to be updated are ID IN (4, 5, 6).

    Now create your stored procedure:

    CREATE PROCEDURE MyStoredProc
    AS
    BEGIN
    
        UPDATE MyTable
            SET IDType = 2
        OUTPUT INSERTED.ID -- Returns the IDs of the updated rows
        WHERE 
            IDType = 1
            AND B = 'REV'
    
    END
    

    To get the updated rows, you use the OUTPUT clause.

    Executing your stored procedure will return:

    ID
    -----------
    4
    5
    6
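    If the IDs should be captured for further use inside the procedure instead of being returned directly, the INTO form of the OUTPUT clause writes them to a table variable (a sketch using the same tables):

```sql
DECLARE @UpdatedIDs TABLE (ID INT);

UPDATE MyTable
    SET IDType = 2
OUTPUT INSERTED.ID INTO @UpdatedIDs (ID)   -- capture instead of return
WHERE
    IDType = 1
    AND B = 'Rev';

SELECT ID FROM @UpdatedIDs;
```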
    
    qid & accept id: (32381393, 32381574) query: SQL order by and group by city soup:

soup wrap:

    Your query:

    SELECT count(city), city
    FROM cities
    WHERE userid = '1'
    GROUP BY city
    ORDER BY count(city) DESC
    LIMIT 5;
    

    is correct for what you want to do. If you are getting the same city on different rows, then perhaps you have a data issue. For instance, perhaps there are unprintable characters after the city name that look like spaces, but are not. One way to tell is by delimiting the city name and looking at its length, something like:

    SELECT count(city), concat('"', city, '"'), length(city)
    FROM cities
    WHERE userid = '1'
    GROUP BY city
    ORDER BY count(city) DESC
    LIMIT 5;
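    If stray whitespace does turn out to be the cause, one possible cleanup (a sketch; TRIM only strips ordinary spaces, so other invisible characters would need REPLACE instead):

```sql
UPDATE cities
SET city = TRIM(city)
WHERE city <> TRIM(city);
```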
    
    qid & accept id: (32427391, 32453469) query: SQL query with JOIN involving two criteria from same table soup:

soup wrap:

    When you join tables, you basically query off a result set containing all the combinations of rows from those joined tables that your where clauses then operate off of. Since you are joining to the Emp_Certs table just once and linking only by Employee_ID, you are getting a result set that looks like this (only showing two columns):

    Last_Name    Cert_ID
    Jones        1
    Jones        3
    Jones        4
    Smith        1
    Smith        2
    

    Your where clause then filters those rows, only accepting rows that have Cert_ID = 1 AND Cert_ID = 4, which is impossible so you should not get any rows.

    I'm not sure if Access has limitations here, but in SQL Server you could handle it in at least two ways:

    1) Link to the table twice, joining for each of the certifications. Table alias 'a' joins to the Emp_Certs table where the Cert_ID is 1 and table alias 'b' joins to the Emp_Certs table where the Cert_ID is 4:

    SELECT 
        Employees.Employee_ID, Employees.Last_Name, Employees.First_Name 
    FROM 
        Employees 
    INNER JOIN 
        Emp_Certs a ON Employees.Employee_ID = a.Employee_ID AND a.Cert_ID = 1
    INNER JOIN 
        Emp_Certs b ON Employees.Employee_ID = b.Employee_ID AND b.Cert_ID = 4
    WHERE 
        Employees.Active_Member = Yes
    ORDER BY Employees.Last_Name;
    

    This gives you a result set that looks like this (Smith doesn't show up because the join criteria doesn't allow any rows unless the employee can link to table a and b):

    Last_Name    a.Cert_ID   b.Cert_ID
    Jones        1           4
    

    2) Use sub-selects in the where clause to filter the employee id on ids with those certifications (looks like Access 2010 supports it):

    SELECT 
        Employees.Employee_ID, Employees.Last_Name, Employees.First_Name 
    FROM 
        Employees 
    WHERE 
        Active_Member = Yes
        AND Employee_ID in (SELECT Employee_ID FROM Emp_Certs WHERE Cert_ID = 1)
        AND Employee_ID in (SELECT Employee_ID FROM Emp_Certs WHERE Cert_ID = 4)
    ORDER BY Employees.Last_Name;
    
    qid & accept id: (32437153, 32437670) query: SQL command to update column value in table soup:

soup wrap:

    So you want to replace the domain in emails. Here is a test select:

    select email, replace(email, '@gmail.com', '@custom.com') as new_email 
    from auth_user 
    where email like '%@gmail.com';
    

    And the update will be:

    update auth_user 
    set email = replace(email, '@gmail.com', '@custom.com') 
    where email like '%@gmail.com';
    
    qid & accept id: (32461027, 32461313) query: How to not apply logic in SQL where block to all when blocks soup:

soup wrap:

    Your logic is close, but it is slightly different:

      SELECT DISTINCT
             (CASE WHEN column2 IN (1, 2) THEN 'name2' -- line 5
                   WHEN column2 IN (3, 4) THEN 'name3' -- line 6
                  ELSE 'UNKNOWN'
              END) AS column1
      FROM table
      WHERE column3 IS null AND
            (column4 = True OR column2 NOT IN (3, 4)) 
    

    Note: the query in your question can be simplified. The subquery is unnecessary.

    It might be easier to follow the logic as:

      WHERE column3 IS null AND
            NOT (column4 <> True AND column2 IN (3, 4)) 
    
    qid & accept id: (32526207, 32526511) query: SQL Join Duplicate records soup:

soup wrap:

    You've identified the problem correctly. The solution is to pre-aggregate the data before the join:

    SELECT RP.POLNUMBER, RP.EFFDATE, LP.PREMIUM
    FROM TBL_A RP INNER JOIN
         (SELECT LP.POLNUMBER, SUM(LP.PREMIUM) as PREMIUM
          FROM TBL_B LP
          GROUP BY LP.POLNUMBER
         ) LP
         ON RP.POLNUMBER = LP.POLNUMBER
     WHERE RP.MOSTRECENTMODEL = 1 AND RP.POLNUMBER = 'ABC123';
    

    Actually, for performance purposes, a correlated subquery would probably work better:

    SELECT RP.POLNUMBER, RP.EFFDATE,
           (SELECT SUM(LP.PREMIUM) as PREMIUM
            FROM TBL_B LP
            WHERE RP.POLNUMBER = LP.POLNUMBER
           ) as PREMIUM
    FROM TBL_A RP 
    WHERE RP.MOSTRECENTMODEL = 1 AND RP.POLNUMBER = 'ABC123';
    

    Your filter conditions look highly selective. There is no reason to aggregate the entire table to just return results for a handful of policies.

    qid & accept id: (32563912, 32564121) query: Select sequential column records and also find the longest sequence soup:

soup wrap:

    Try this. The CTE gets the ids with more than one record, and the query extracts just those records.

    WITH ids_recurring_more_than_once AS
    (SELECT id FROM mytable GROUP BY id HAVING COUNT(*) >1)
    SELECT m.* FROM mytable m
    INNER JOIN ids_recurring_more_than_once 
    ON m.id = ids_recurring_more_than_once.id
    

    By "longest sequence", do you mean the id with the most recurrences? In that case, replace the CTE with:

    SELECT id FROM mytable GROUP BY id ORDER BY COUNT(*) DESC LIMIT 1
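    Note that LIMIT 1 picks one id arbitrarily if several tie for the most recurrences; a tie-safe CTE replacement could compare against the maximum count instead (a sketch):

```sql
SELECT id
FROM mytable
GROUP BY id
HAVING COUNT(*) = (SELECT MAX(c)
                   FROM (SELECT COUNT(*) AS c
                         FROM mytable
                         GROUP BY id) AS counts)
```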
    
    qid & accept id: (32565311, 32565396) query: SELECT records from two table soup:

soup wrap:

    Not quite sure if I got you right. This will return people who have job 56565 and/or 23232:

    select distinct p.name
    from people p
      join jobs j on p.id = j.peopleid
    where j.id in (56565, 23232)
    

    If BOTH jobs are required:

    select p.name
    from people p
      join jobs j on p.id = j.peopleid
    where j.id in (56565, 23232)
    group by p.name
    having count(*) > 1
    

    The HAVING clause can also be written as

    having max(j.id) <> min(j.id)
    

    Perhaps better performance that way.
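    One caveat: count(*) > 1 assumes each (person, job) pair appears at most once in jobs. If duplicate rows are possible, the max/min form still works, and so does counting distinct ids (a sketch):

```sql
select p.name
from people p
  join jobs j on p.id = j.peopleid
where j.id in (56565, 23232)
group by p.name
having count(distinct j.id) = 2
```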

    qid & accept id: (32594433, 32594484) query: CASE in WHERE clause? Filter query by different column depending a value SQL Server 2008 R2 soup:

soup wrap:

    You can use simple AND, OR operations to get what you want:

    SELECT * 
    FROM TABLE 
    WHERE ACTIVE_FLAG = 1 
          AND (
           (@param < '1/1/2015' AND COLUMN_1 = 'Warehouse')
            OR
           (@param >= '1/1/2015' AND COLUMN_2 = 'Warehouse, CA')
          )
    

    If @param < '1/1/2015', then the WHERE clause becomes:

    ACTIVE_FLAG = 1 AND COLUMN_1 = 'Warehouse'
    

    otherwise, in case when @param >= '1/1/2015', the WHERE clause becomes:

    ACTIVE_FLAG = 1 AND COLUMN_2 = 'Warehouse, CA'
    
    qid & accept id: (32615598, 32615696) query: Full Outer Join with Group By soup:

soup wrap:

    try this:

    SELECT
        u.UserID AS 'User',
        u.FullName AS Name,
        isnull(SUM(Minutes) / 60,0) AS [Time]
    FROM
        MainUsers u left OUTER JOIN 
        TimeSheet t  ON 
        u.UserID = t.UserID
    GROUP BY
        u.UserID,
        u.FullName
    ORDER BY
        u.UserID
    

    SQL Fiddle

    if you want to include conditions on your timesheet table, such as month(timestamp) = 9 and year(timestamp) = 2015, and you do it in the WHERE clause, it converts your outer join into an inner join, because the WHERE clause requires fields from the TimeSheet table. To limit by month and year of your left-outer-joined table, put the conditions in the JOIN clause instead of the WHERE clause, like:

    SELECT
        u.UserID AS 'User',
        u.FullName AS Name,
        isnull(SUM(Minutes) / 60,0) AS [Time]
    FROM
        MainUsers u left OUTER JOIN 
        TimeSheet t  ON 
        u.UserID = t.UserID and
        month(timestamp) = 9 and year(timestamp) = 2015
    GROUP BY
        u.UserID,
        u.FullName
    ORDER BY
        u.UserID
    

    sql fiddle

    qid & accept id: (32633252, 32633372) query: SQL - Add multiple columns to a VIEW soup:

soup wrap:

    You're very close. You just have a little extra happening in there. Take out the bit after the comma on your FROM .. line:

    CREATE VIEW [dbo].[V_PS_DA]
    AS WITH
    today AS
    (   SELECT * 
        FROM dbo.LK_NET_WORK_DAYS -- This contains the date data needed
        WHERE [DATE] = CAST(GETDATE() AS DATE)
    )
    SELECT 
      p.*,
      hrs.DATE_ORDINAL      ENTER_HRSC_ORDINAL,
      strt.DATE_ORDINAL     START_DATE_ORDINAL,
      ndt.DATE_ORDINAL      END_DATE_ORDINAL,
      today.DATE_ORDINAL    TODAY_ORDINAL,
      kst.[Small Title] Small_Title,
      kt.[Title]    Title,
      kd.[Demonstration]  Demonstration,
      ks.SLS    SLS
    
    FROM dbo.PS_DA p
    LEFT JOIN dbo.LK_NET_WORK_DAYS hrs
      ON p.ENTER_HRSC = hrs.[DATE]
    LEFT JOIN dbo.LK_NET_WORK_DAYS strt
      ON p.START_DATE = strt.[DATE]
    LEFT JOIN dbo.LK_NET_WORK_DAYS ndt
      ON p.END_DATE = ndt.[DATE]
    CROSS JOIN today
    LEFT JOIN dbo.LK_METRICS k
      ON k.METRIC_ID_OLD = METRIC_NUMBER
    

    The only other thing is specifying which table METRIC_NUMBER is from. Is that p.METRIC_NUMBER? Chances are it won't make a difference overall, since you probably only have a single table with the field METRIC_NUMBER, but with SQL it's a good idea to be as explicit as possible.

    Lastly, you can then use fields from your K table in your SELECT statement like:

    CREATE VIEW [dbo].[V_PS_DA]
    AS WITH
    today AS
    (   SELECT * 
        FROM dbo.LK_NET_WORK_DAYS -- This contains the date data needed
        WHERE [DATE] = CAST(GETDATE() AS DATE)
    )
    SELECT 
      p.*,
      k.somefield,
      k.someotherfield,
      hrs.DATE_ORDINAL      ENTER_HRSC_ORDINAL,
      strt.DATE_ORDINAL     START_DATE_ORDINAL,
      ndt.DATE_ORDINAL      END_DATE_ORDINAL,
      today.DATE_ORDINAL    TODAY_ORDINAL,
      kst.[Small Title] Small_Title,
      kt.[Title]    Title,
      kd.[Demonstration]  Demonstration,
      ks.SLS    SLS
    
    FROM dbo.PS_DA p
    LEFT JOIN dbo.LK_NET_WORK_DAYS hrs
      ON p.ENTER_HRSC = hrs.[DATE]
    LEFT JOIN dbo.LK_NET_WORK_DAYS strt
      ON p.START_DATE = strt.[DATE]
    LEFT JOIN dbo.LK_NET_WORK_DAYS ndt
      ON p.END_DATE = ndt.[DATE]
    CROSS JOIN today
    LEFT JOIN dbo.LK_METRICS k
      ON k.METRIC_ID_OLD = METRIC_NUMBER
    
    qid & accept id: (32697433, 32698110) query: How to Auto-Number Duplicate Rows Using Sequence Based on Multiple Duplicate Columns (T-SQL) soup:

soup wrap:

    You can use DENSE_RANK() to give each of your unique combinations of Surname, BirthDate and Sex a unique number, then simply place this into an update statement to update your column:

    UPDATE  t
    SET     ExtID = NewExtID
    FROM    (   SELECT  ExtID,
                        NewExtID = 'R' + CAST(DENSE_RANK() 
                                                OVER(ORDER BY Surname, Birthdate, Sex) 
                                            AS VARCHAR(10))
                FROM    dbo.YourTableName
            ) AS t;
    

    FULL WORKING EXAMPLE

    IF OBJECT_ID(N'tempdb..#T', 'U') IS NOT NULL
        DROP TABLE #T;
    
    CREATE TABLE #T
    (   Ref INT, 
        Surname VARCHAR(50), 
        Firstname VARCHAR(50), 
        Birthdate DATE, 
        Sex CHAR(1), 
        ExternalSource VARCHAR(50), 
        ExtID VARCHAR(11)
    );
    
    INSERT #T (Ref, Surname, Firstname, Birthdate, Sex, ExternalSource)
    VALUES
        (1, 'AAA', 'AA', '2000-01-01', 'M', 'Alpha'),
        (2, 'BBB', 'BB', '2001-01-01', 'F', 'Beta'),
        (3, 'AAA', 'AA', '2000-01-01', 'M', 'Beta'),
        (4, 'CCC', 'CC', '2003-01-01', 'M', 'Alpha'),
        (5, 'BBB', 'BB', '2001-01-01', 'F', 'Gamma'),
        (6, 'DDD', 'DD', '2004-01-01', 'M', 'Beta'),
        (7, 'CCC', 'CC', '2003-01-01', 'M', 'Alpha'),
        (8, 'AAA', 'AA', '2000-01-01', 'M', 'Gamma');
    
    UPDATE  t
    SET     ExtID = NewExtID
    FROM    (   SELECT  ExtID,
                        NewExtID = 'R' + CAST(DENSE_RANK() 
                                                OVER(ORDER BY Surname, Birthdate, Sex) 
                                            AS VARCHAR(10))
                FROM    #T
            ) AS t;
    
    SELECT  *
    FROM    #T
    ORDER BY Ref;       
    
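    As a quick sanity check of the core idea, here is a minimal sketch of the same DENSE_RANK() assignment using Python's bundled sqlite3 (SQLite 3.25+ supports window functions); the table and column names are cut-down stand-ins for the example above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE T (Ref INTEGER, Surname TEXT, Birthdate TEXT, Sex TEXT, ExtID TEXT);
INSERT INTO T (Ref, Surname, Birthdate, Sex) VALUES
    (1, 'AAA', '2000-01-01', 'M'),
    (2, 'BBB', '2001-01-01', 'F'),
    (3, 'AAA', '2000-01-01', 'M'),
    (4, 'CCC', '2003-01-01', 'M');
""")

# DENSE_RANK gives every distinct (Surname, Birthdate, Sex) combination
# the same number, so duplicate people share one identifier.
rows = conn.execute("""
    SELECT Ref,
           'R' || DENSE_RANK() OVER (ORDER BY Surname, Birthdate, Sex) AS ExtID
    FROM T
    ORDER BY Ref
""").fetchall()
print(rows)
```

    Rows 1 and 3 share the same (Surname, Birthdate, Sex) combination, so they receive the same R-number.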

    ADDENDUM

    For maintaining this, I would suggest a slightly different approach, and have a separate table to maintain your ExtID, which would allow you to leverage an identity column:

    CREATE TABLE dbo.Ext 
    (
            ID INT IDENTITY(1, 1) NOT NULL,
            Surname VARCHAR(50) NOT NULL,
            BirthDate DATE NOT NULL,
            Sex CHAR(1) NOT NULL,
            ExtID AS 'R' + CAST(ID AS VARCHAR(10)),
        CONSTRAINT PK_Ext__ID PRIMARY KEY (ID)
    );
    CREATE UNIQUE NONCLUSTERED INDEX UQ_Ext__Surname_Birthdate_Sex ON dbo.Ext (Surname, Birthdate, Sex);
    

    Realistically, with a similar index on your base tables you probably don't need this ExtID column at all: you can just join to the above table to get the ExtID without a huge performance hit. But on the off chance you do need to update the ExtID column, you could use:

    MERGE dbo.Ext AS e WITH (HOLDLOCK)
    USING 
    (   SELECT  DISTINCT Surname, Birthdate, Sex
        FROM    dbo.YourTable
    ) AS t
        ON t.Surname = e.Surname
        AND t.Birthdate = e.Birthdate
        AND t.Sex = e.Sex
    WHEN NOT MATCHED THEN 
        INSERT (Surname, Birthdate, Sex)
        VALUES (t.Surname, t.Birthdate, t.Sex);
    
    UPDATE  t
    SET     ExtID = e.ExtID
    FROM    dbo.YourTable AS t
            INNER JOIN dbo.Ext AS e
                ON e.Surname = t.Surname
                AND e.Birthdate = t.Birthdate
                AND e.Sex = t.Sex
    WHERE   t.ExtID IS NULL;
    

    I have used MERGE WITH (HOLDLOCK) because it is the method I know of that is least vulnerable to race conditions and unique constraint violations.
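    SQLite has no MERGE, but the same insert-only-when-not-matched behaviour against the unique index can be sketched with INSERT OR IGNORE; this is an analogue for illustration, not the SQL Server statement above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Ext (
    ID INTEGER PRIMARY KEY AUTOINCREMENT,
    Surname TEXT NOT NULL, Birthdate TEXT NOT NULL, Sex TEXT NOT NULL
);
CREATE UNIQUE INDEX UQ_Ext ON Ext (Surname, Birthdate, Sex);
""")

people = [('AAA', '2000-01-01', 'M'),
          ('BBB', '2001-01-01', 'F'),
          ('AAA', '2000-01-01', 'M')]   # duplicate of the first row

# INSERT OR IGNORE plays the role of MERGE ... WHEN NOT MATCHED THEN INSERT:
# rows whose (Surname, Birthdate, Sex) already exist are silently skipped.
conn.executemany(
    "INSERT OR IGNORE INTO Ext (Surname, Birthdate, Sex) VALUES (?, ?, ?)",
    people)

rows = conn.execute("SELECT ID, Surname FROM Ext ORDER BY ID").fetchall()
print(rows)
```

    The duplicate person is collapsed onto the first row's identity value, which is exactly what the MERGE achieves under concurrency with HOLDLOCK.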

    If all of this is not suitable, then I would still suggest as above (if possible) removing the R from the identifier, and making it just an integer. You can, if needed, create the text column as a computed column:

    CREATE TABLE #T
    (   Ref INT, 
        Surname VARCHAR(50), 
        Firstname VARCHAR(50), 
        Birthdate DATE, 
        Sex CHAR(1), 
        ExternalSource VARCHAR(50), 
        ExtIntID INT,
        ExtID AS 'R' + CAST(ExtIntID AS VARCHAR(10))
    );
    

    This will just make getting the maximum easier, and would probably make other uses easier too.

    Then, your update statement is fairly similar:

    UPDATE  t
    SET     ExtIntID = NewExtID
    FROM    (   SELECT  t.ExtIntID,
                        NewExtID = CASE WHEN e.ExtIntID IS NOT NULL THEN e.ExtIntID
                                    ELSE
                                        ISNULL(m.MaxID, 0) + 
                                        DENSE_RANK() OVER(PARTITION BY e.ExtIntID
                                                        ORDER BY t.Surname, t.Birthdate, t.Sex) 
                                    END
                FROM    #T AS t
                        LEFT JOIN
                        (   SELECT  Surname, Birthdate, Sex, ExtIntID = MAX(ExtIntID)
                            FROM     #T
                            GROUP BY Surname, Birthdate, Sex
                        ) AS e
                            ON e.Surname = t.Surname
                            AND e.Birthdate = t.Birthdate
                            AND e.Sex = t.Sex
                        OUTER APPLY (SELECT MAX(ExtIntID) FROM #T) AS m (MaxID)
                WHERE   t.ExtIntID IS NULL              
            ) AS t;
    

    If you can't make an INT column, again the update is pretty similar, you just need to mess around with formatting more:

    UPDATE  t
    SET     ExtID = NewExtID
    FROM    (   SELECT  t.ExtID,
                        NewExtID = CASE WHEN e.ExtID IS NOT NULL THEN e.ExtID
                                    ELSE
                                        'R' + 
                                        CAST(ISNULL(m.MaxID, 0) + 
                                            DENSE_RANK() OVER(PARTITION BY e.ExtID
                                                                ORDER BY t.Surname, t.Birthdate, t.Sex) 
                                            AS VARCHAR(10))
                                    END
                FROM    #T AS t
                        LEFT JOIN
                        (   SELECT  Surname, Birthdate, Sex, ExtID = MAX(ExtID)
                            FROM     #T
                            GROUP BY Surname, Birthdate, Sex
                        ) AS e
                            ON e.Surname = t.Surname
                            AND e.Birthdate = t.Birthdate
                            AND e.Sex = t.Sex
                        OUTER APPLY (SELECT MAX(CONVERT(INT, SUBSTRING(ExtID, 2, LEN(ExtID)))) FROM #T) AS m (MaxID)
                WHERE   t.ExtID IS NULL             
            ) AS t;
    
    qid & accept id: (32719239, 32720510) query: ORACLE: How to check for and remove repeating column values soup:

    soup wrap:

    Limiting your results to 3 B Numbers at most is easy using the row_number() analytic function.

    select a_number, b_number
      from (select a_number, b_number,
                   row_number() over (partition by b_number order by null) as rn
              from your_table)
     where rn <= 3
    

    However, the above query is not explicit about which 3 rows it will preserve (order by null).

    If you want to keep the first 3 occurrences of a B Number in your list, then you need a way to explicitly define the order of your list. Do you have some timestamp field perhaps?

    In any case, whatever field(s) define(s) the order of your list, use that in the order by clause of the row_number() function call:

    row_number() over (partition by b_number order by pick_an_ordering_column)
    
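    To illustrate the explicit-ordering version, here is a small sketch in Python's sqlite3 (whose row_number() behaves the same way), using a hypothetical created column as the ordering column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE your_table (a_number INTEGER, b_number INTEGER, created TEXT)")
conn.executemany(
    "INSERT INTO your_table VALUES (?, ?, ?)",
    [(i, 100, f"2015-01-{i:02d}") for i in range(1, 6)]    # five rows for b=100
    + [(i, 200, f"2015-02-{i:02d}") for i in range(1, 3)]  # two rows for b=200
)

# Keep at most 3 rows per b_number, preserving the EARLIEST ones by the
# assumed created timestamp instead of an arbitrary (order by null) pick.
rows = conn.execute("""
    SELECT a_number, b_number
    FROM (SELECT a_number, b_number,
                 ROW_NUMBER() OVER (PARTITION BY b_number ORDER BY created) AS rn
          FROM your_table)
    WHERE rn <= 3
    ORDER BY b_number, a_number
""").fetchall()
print(rows)
```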
    qid & accept id: (32732880, 32737692) query: Creating views with different columns than table soup:

    soup wrap:

    I assume the table and data are as below.

    create table visit (visit_to text, visit_by text, visit_on datetime, value int);
    insert into visit values('x', 'a', '2015-02-02 00:00:00', 1);
    insert into visit values('x', 'b', '2015-02-16 00:00:00', 2);
    insert into visit values('y', 'c', '2015-02-18 00:00:00', 3);
    insert into visit values('x', 'd', '2015-02-14 00:00:00', 4);
    

    And the query is like this.

    select
        visit_to,
        date(visit_on, 'start of month') year_month,
        replace(rtrim(group_concat((case when date(date(visit_on, '-15 days'), 'start of month') <> date(visit_on, 'start of month') then visit_by else '' end), (case when date(date(visit_on, '-15 days'), 'start of month') <> date(visit_on, 'start of month') then ' ' else '' end))), ' ', ',') fortnite1,
        replace(rtrim(group_concat((case when date(date(visit_on, '-15 days'), 'start of month') = date(visit_on, 'start of month') then visit_by else '' end), (case when date(date(visit_on, '-15 days'), 'start of month') = date(visit_on, 'start of month') then ' ' else '' end))), ' ', ',') fortnite2
    from visit
    group by visit_to, date(visit_on, 'start of month')
    ;
    

    You can try http://goo.gl/TXomRO

    Hope this helps.

    qid & accept id: (32768245, 32769165) query: Select * from table where desired period does not overlap with existing periods soup:

    soup wrap:

    You'll want to check that records don't exist where 'date from' is less than or equal to the end date in your range and 'date to' is greater than or equal to the start date in your range.

    select t1.room
    from reservations t1
    where not exists (
      select *
      from reservations t2
      where t2.room = t1.room
      and t2.datefrom <= '2015-08-26'
      and t2.dateto >= '2015-08-13'
    )
    group by room
    

    You can try it out here: http://sqlfiddle.com/#!9/cbd59/5

    I'm new to the site, so it won't let me post a comment, but I think the problem on the first answer is that the operators should be reversed.

    As mentioned in a previous comment, this is only good if all of the rooms have a reservation record. If not, better to select from your rooms table like this: http://sqlfiddle.com/#!9/0b96e/1

    select room
    from rooms
    where not exists (
      select *
      from reservations
      where rooms.room = reservations.room
      and reservations.datefrom <= '2015-08-26'
      and reservations.dateto >= '2015-08-13'
    )
    
    qid & accept id: (32802567, 32802614) query: SQLite query; Group By Hour And By Device soup:

    soup wrap:

    You just need to add Gate to the group by and select:

    SELECT gate, strftime('%Y-%m-%dT%H:00:00.000', DateTime),
           max(Count) - min(Count) 
    FROM GateTbl 
    GROUP by gate, strftime('%Y-%m-%dT%H:00:00.000', DateTime);
    

    EDIT:

    Oops, sorry about that. What you actually need is the first count from the next hour. Standard SQL has a great function for that, lag(), but SQLite does not support it. You can, however, do it with a correlated subquery:

    WITH gh as (
          SELECT gate, strftime('%Y-%m-%dT%H:00:00.000', DateTime) as dt,
                 MIN(Count) as mincount, MAX(count) as maxcount
          FROM GateTbl 
          GROUP by gate, strftime('%Y-%m-%dT%H:00:00.000', DateTime)
         )
    SELECT gh.gate, gh.dt,
           COALESCE(gh.next_mincount, gh.maxcount) - gh.mincount
    FROM (SELECT gh.*,
                 (SELECT gh2.mincount
                  FROM gh gh2
                  WHERE gh2.gate = gh.gate AND gh2.dt > gh.dt
                  ORDER BY gh2.dt
                  LIMIT 1
                 ) as next_mincount
          FROM gh
         ) gh;
    

    The coalesce() is just for the last hour. This uses the last timestamp during that hour. If there is only one, then you will get 0 for that hour.
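    Here is a condensed, runnable sketch of the same idea in Python's sqlite3, with the correlated subquery folded directly into the COALESCE (sample gate data assumed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE GateTbl (gate TEXT, DateTime TEXT, Count INTEGER)")
conn.executemany("INSERT INTO GateTbl VALUES (?, ?, ?)", [
    ('A', '2015-01-01 10:05', 100),
    ('A', '2015-01-01 10:55', 140),
    ('A', '2015-01-01 11:10', 150),
    ('A', '2015-01-01 11:40', 180),
])

rows = conn.execute("""
    WITH gh AS (
        SELECT gate, strftime('%Y-%m-%dT%H:00', DateTime) AS dt,
               MIN(Count) AS mincount, MAX(Count) AS maxcount
        FROM GateTbl
        GROUP BY gate, strftime('%Y-%m-%dT%H:00', DateTime)
    )
    SELECT gh.gate, gh.dt,
           -- next hour's first count, falling back to this hour's last count
           COALESCE((SELECT gh2.mincount FROM gh gh2
                     WHERE gh2.gate = gh.gate AND gh2.dt > gh.dt
                     ORDER BY gh2.dt LIMIT 1),
                    gh.maxcount) - gh.mincount AS per_hour
    FROM gh ORDER BY gh.dt
""").fetchall()
print(rows)
```

    The 10:00 hour uses the 11:00 hour's first reading (150 - 100 = 50); the final hour has no successor, so it falls back to its own last reading (180 - 150 = 30).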

    qid & accept id: (32819225, 32820633) query: MySQL Limit joined table soup:

    soup wrap:

    If you can live with the answer as 3 rows instead of 9, the easiest way is:

    select c.*,
           substring_index(group_concat(q.id order by rand()), ',', 3) as question_ids
    from category c join
         question q
         on c.id = q.cat_id
    group by c.id
    order by rand();
    

    Otherwise, you can do this using variables:

    select cq.*
    from (select c.*, q.*,
                 (@rn := if(@c = c.id, @rn + 1,
                            if(@c := c.id, 1, 1)
                           )
                 ) as rn
          from (select c.*
                from category c
                order by rand()
                limit 3
               ) c join
               question q
               on c.id = q.cat_id cross join
               (select @c := 0, @rn := 0) params
          order by c.id, rand()
         ) cq
    where rn <= 3;
    

    Of course, you should select the columns you actually need rather than using *.

    qid & accept id: (32832623, 32832771) query: SQL database (Django) - Relate all records in table B to each record in table A soup:

    soup wrap:

    You will need another model, which could be Question. The final result would be something like:

    class User(models.Model):
        user_name = models.CharField(...)
    
    class Question(models.Model):
        question_text = models.CharField(...)
    
    class UserAnswer(models.Model):
        question = models.ForeignKey(Question)
        user = models.ForeignKey(User)
        answer = models.CharField(...)
    

    If you want more complicated answers, like specific values or lists of values, you can create one more model:

    class QuestionAlternative(models.Model):
        question = models.ForeignKey(Question)
        value = models.CharField(...)
    

    And then redefine UserAnswer:

    class UserAnswer(models.Model):
        question = models.ForeignKey(Question)
        user = models.ForeignKey(User)
        answer = models.ForeignKey(QuestionAlternative)
    

    With this, you will have the Questions in one place, one UserAnswer per question per user, and as many QuestionAlternatives as must exist. Don't worry about the ForeignKey fields; they are not overhead, and they build beautiful, reusable structures.

    qid & accept id: (32849144, 32849584) query: SQL to compare field values within a set of rows grouped by a unique ID per group soup:

    soup wrap:

    You can JOIN a table to itself (a self-join). So just extract records that have AUDITTYPE = 1 (after update) with a different KEYNAME than the record that has the same AUDITKEY but AUDITTYPE = 0.

    SQL Fiddle

    MS SQL Server 2008 Schema Setup:

    CREATE TABLE Table1
        ([AUDITKEY] varchar(36), [AUDITTYPE] int, [AUDITTYPECODE] varchar(13), [KEYNAME] varchar(9))
    ;
    
    INSERT INTO Table1
        ([AUDITKEY], [AUDITTYPE], [AUDITTYPECODE], [KEYNAME])
    VALUES
        ('12345678-1234-1234-1234-123456789012', 0, 'before update', 'BLABLABLA'),
        ('12345678-1234-1234-1234-123456789012', 1, 'after update', 'BLABLABLA'),
        ('22345678-1234-1234-1234-123456789012', 0, 'before update', 'BLABLA'),
        ('22345678-1234-1234-1234-123456789012', 1, 'after update', 'ALBALB')
    ;
    

    Query 1:

    SELECT T2.AUDITKEY, T2.KEYNAME
    FROM Table1 T1
    INNER JOIN Table1 T2 ON T1.AUDITKEY = T2.AUDITKEY AND
                            T1.AUDITTYPE = 0 AND T2.AUDITTYPE = 1 AND
                            T1.KEYNAME <> T2.KEYNAME
    

    Results:

    |                             AUDITKEY | KEYNAME |
    |--------------------------------------|---------|
    | 22345678-1234-1234-1234-123456789012 |  ALBALB |
    
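    The self-join translates directly to other engines; here it is exercised in Python's sqlite3 with shortened sample keys standing in for the GUIDs above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (AUDITKEY TEXT, AUDITTYPE INTEGER, KEYNAME TEXT);
INSERT INTO Table1 VALUES
    ('key-1', 0, 'BLABLABLA'), ('key-1', 1, 'BLABLABLA'),
    ('key-2', 0, 'BLABLA'),    ('key-2', 1, 'ALBALB');
""")

# Pair each before-update row (T1) with its after-update row (T2) on the
# audit key, and keep only pairs where the key name actually changed.
rows = conn.execute("""
    SELECT T2.AUDITKEY, T2.KEYNAME
    FROM Table1 T1
    JOIN Table1 T2 ON T1.AUDITKEY = T2.AUDITKEY
                  AND T1.AUDITTYPE = 0 AND T2.AUDITTYPE = 1
                  AND T1.KEYNAME <> T2.KEYNAME
""").fetchall()
print(rows)
```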
    qid & accept id: (32864721, 32864764) query: Where clause to check against two columns in another table soup:

    soup wrap:

    Use exists:

    select t.location, t.warehouse
    from table1 t
    where exists (select 1
                  from table2 t2
                  where t.location = t2.area and t.warehouse = t2.code
                 );
    

    I should point out that some databases support row constructors with in. That allows you to do:

    select t.location, t.warehouse
    from table1 t
    where (t.location, t.warehouse) in (select t2.area, t2.code from table2 t2);
    
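    SQLite (3.15+) happens to be one of the databases supporting row constructors, so both forms can be compared directly (sample tables assumed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (location TEXT, warehouse TEXT);
CREATE TABLE table2 (area TEXT, code TEXT);
INSERT INTO table1 VALUES ('north', 'W1'), ('south', 'W2');
INSERT INTO table2 VALUES ('north', 'W1');
""")

via_exists = conn.execute("""
    SELECT t.location, t.warehouse FROM table1 t
    WHERE EXISTS (SELECT 1 FROM table2 t2
                  WHERE t.location = t2.area AND t.warehouse = t2.code)
""").fetchall()

# Row-constructor form: compare both columns as one tuple.
via_in = conn.execute("""
    SELECT t.location, t.warehouse FROM table1 t
    WHERE (t.location, t.warehouse) IN (SELECT t2.area, t2.code FROM table2 t2)
""").fetchall()
print(via_exists, via_in)
```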
    qid & accept id: (32880727, 32880894) query: how to get Latest date from table in oracle procedure soup:

    soup wrap:

    Have you tried using MAX()?

    select * from oehr_employees WHERE HIRE_DATE = (SELECT MAX(HIRE_DATE) FROM OEHR_EMPLOYEES)
    

    OUTPUT:

    EMPLOYEE_ID FIRST_NAME           LAST_NAME                 EMAIL                     PHONE_NUMBER         HIRE_DATE JOB_ID         SALARY COMMISSION_PCT MANAGER_ID DEPARTMENT_ID
    
    167 Amit    Banda   ABANDA  011.44.1346.729268  21-APR-00   SA_REP  6200    0.1 147 80
    173 Sundita Kumar   SKUMAR  011.44.1343.329268  21-APR-00   SA_REP  6100    0.1 148 80
    
    qid & accept id: (32928624, 32928803) query: SQL foreach table and get number for duplicate data using reference date soup:

    soup wrap:

    Step one is to get a list of the newest dates. You can do this with MAX(date), but that alone will just get you the newest date in the table. You can tell the database you want the newest date per name with a GROUP BY clause. In this case, GROUP BY name.

    SELECT name, MAX(date)
    FROM names
    GROUP BY name
    

    Now you can do some date math on MAX(date) to determine how old it is. MySQL has DATEDIFF to get the difference between two dates in days. CURRENT_DATE() gives the current date. So DATEDIFF(MAX(date), CURRENT_DATE()).

    SELECT name, DATEDIFF(MAX(date), CURRENT_DATE()) as Days
    FROM names
    GROUP BY name
    

    Finally, to append the "days" part, use CONCAT.

    SELECT name, CONCAT(DATEDIFF(MAX(date), CURRENT_DATE()), " days") as Days
    FROM names
    GROUP BY name
    

    You can play around with it in SQLFiddle.

    I would recommend not doing that last part in SQL. You won't get the formatting quite right ("1 days" is bad grammar) and the data is more useful as a number. Instead, do the formatting at the point you want to display the data.

    qid & accept id: (32933497, 32933534) query: MySQL select records between day of the year and year soup:

    soup wrap:

    Your method of storing the dates is exactly wrong for what you want to do. The general form would be:

    where (year = 2014 and day >= 275 or year > 2014) and
          (year = 2015 and day <= 176 or year < 2015)
    

    This will work for any pair of years, not just those that are one year apart.

    Now, if you stored the dates normally, then you would simply do:

    where date >= makedate(2014, 275) and date <= makedate(2015, 176)
    

    What is really, really nice about this structure is that MySQL can use an index on date. That isn't possible with your query.

    In general, a micro-optimization such as using integers instead of some other data type is not worth the effort in relational databases. Usually, the cost of reading the data is much more expensive than processing within a row. And this question is a great example of that: why try to speed up the comparisons when an index can remove the need for them entirely?
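    The equivalence between the split (year, day) predicate and the MAKEDATE range can be checked with a short Python sketch (makedate here is a stand-in for MySQL's MAKEDATE):

```python
from datetime import date, timedelta

def makedate(year, day_of_year):
    # Equivalent of MySQL's MAKEDATE(year, day): day 1 is January 1st.
    return date(year, 1, 1) + timedelta(days=day_of_year - 1)

def in_range_split(year, day):
    # The "general form" predicate over separate year and day-of-year columns.
    return ((year == 2014 and day >= 275) or year > 2014) and \
           ((year == 2015 and day <= 176) or year < 2015)

# Both representations agree on dates around the boundaries.
lo, hi = makedate(2014, 275), makedate(2015, 176)
for y, d in [(2014, 274), (2014, 275), (2015, 1), (2015, 176), (2015, 177)]:
    assert in_range_split(y, d) == (lo <= makedate(y, d) <= hi)
print("predicates agree")
```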

    qid & accept id: (32943515, 32943669) query: Get records between two datetimes in SQL Server soup:

    soup wrap:

    I suspect that you are refactoring your query to make it sargable and to use possible indexes on columns DateFrom and DateTo.

    Those will not produce the same results, because your second query will omit rows where the date part of DateFrom or DateTo equals wkenddate but the time is after midnight. For example, say wkenddate = '20151005' and your column DateFrom = '20151005 15:30'. The first query will include this row, since both date parts are equal, but the second query will omit it, since '20151005 15:30' > '20151005'.

    Consider this example:

    DECLARE @t TABLE(d DATETIME)
    
    INSERT INTO @t VALUES
    ('20151001 10:30'),
    ('20151004 10:30'),
    ('20151005 10:30')
    
    DECLARE @wkstdate DATE = '20151001', @wkenddate DATE = '20151005'
    
    SELECT * FROM @t WHERE CAST(d AS DATE) BETWEEN @wkstdate AND @wkenddate
    SELECT * FROM @t WHERE d >= @wkstdate AND d <= @wkenddate
    SELECT * FROM @t WHERE d >= @wkstdate AND d < DATEADD(dd, 1, @wkenddate)
    

    Outputs:

    2015-10-01 10:30:00.000
    2015-10-04 10:30:00.000
    2015-10-05 10:30:00.000
    
    2015-10-01 10:30:00.000
    2015-10-04 10:30:00.000
    
    2015-10-01 10:30:00.000
    2015-10-04 10:30:00.000
    2015-10-05 10:30:00.000
    

    You should rewrite as:

    SELECT office
    FROM   officebudget bkto
    WHERE  officeid = @officeid
           AND (
                   (
                       bkto.DateFrom >= @wkstdate
                       AND bkto.DateFrom < dateadd(dd, 1 , @wkenddate)
                   )
                   OR (bkto.DateTo >= @wkstdate
                   AND bkto.DateTo < dateadd(dd, 1, @wkenddate))
               );
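    As a quick illustration of the half-open interval (table name and values are made up), the three cases can be reproduced with Python's sqlite3, since SQLite compares ISO-8601 datetime strings in chronological order:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (d TEXT)")
con.executemany("INSERT INTO t VALUES (?)",
                [("2015-10-01 10:30",), ("2015-10-04 10:30",), ("2015-10-05 10:30",)])

start, end = "2015-10-01", "2015-10-05"

# The inclusive comparison on the raw column silently drops the last day's rows...
closed = con.execute("SELECT count(*) FROM t WHERE d >= ? AND d <= ?",
                     (start, end)).fetchone()[0]          # 2 rows

# ...while the half-open interval keeps them, without casting every row.
half_open = con.execute(
    "SELECT count(*) FROM t WHERE d >= ? AND d < date(?, '+1 day')",
    (start, end)).fetchone()[0]                           # 3 rows
print(closed, half_open)
```

    The half-open form also stays sargable, which is the whole point of the rewrite.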
    
    qid & accept id: (32958815, 32958917) query: query execution stops when overtake a threshold soup:


    It seems like you are trying to estimate the count rather than actually calculating it, right?

    There is an interesting article explaining how to do just that. It states that it is much faster than executing the queries themselves, so it might be just what you need: https://wiki.postgresql.org/wiki/Count_estimate

    Basically, the idea is that you either query the catalog table pg_class:

     SELECT reltuples FROM pg_class WHERE relname = 'tbl';
    

    Or, if you have a more complex query:

     SELECT count_estimate('SELECT * FROM tbl WHERE t < 100');
    

    where count_estimate is a function that parses the execution plan to extract the estimate:

    CREATE FUNCTION count_estimate(query text) RETURNS INTEGER AS
    $func$
    DECLARE
        rec   record;
        ROWS  INTEGER;
    BEGIN
        FOR rec IN EXECUTE 'EXPLAIN ' || query LOOP
        ROWS := SUBSTRING(rec."QUERY PLAN" FROM ' rows=([[:digit:]]+)');
        EXIT WHEN ROWS IS NOT NULL;
    END LOOP;
    
    RETURN ROWS;
    END
    $func$ LANGUAGE plpgsql;
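    The heart of that function is just pulling the first rows=N out of the plan text; here is a Python equivalent of the regular expression (the sample plan line is illustrative):

```python
import re

def rows_estimate(plan_line):
    """Extract the planner's row estimate from one EXPLAIN output line,
    matching the same ' rows=N' pattern as the plpgsql function."""
    m = re.search(r' rows=(\d+)', plan_line)
    return int(m.group(1)) if m else None

# A typical first line of Postgres EXPLAIN output (illustrative text):
line = "Seq Scan on tbl  (cost=0.00..5840.00 rows=99814 width=244)"
print(rows_estimate(line))   # 99814
```

    Keep in mind this is an estimate from the planner's statistics, not an exact count.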
    
    qid & accept id: (32962508, 32962625) query: How to convert number to words - ORACLE soup:


    Use the force Luke ;)

    SqlFiddleDemo

    SELECT UPPER(TO_CHAR(TO_DATE(500,'J'),'Jsp')) || '/=' AS new_value
    FROM dual;  
    

    The trick is converting the number to a date via the Julian format 'J', then back with 'Jsp', which spells the day number out in words.

    EDIT:

    Adding support for negative numbers:

    SqlFiddleDemo

    WITH cte AS
    (
      SELECT 10 AS num      FROM dual
      UNION ALL SELECT -500 FROM dual
      UNION ALL SELECT 0    FROM dual
    )
    SELECT num AS old_value,
           decode( sign( num ), -1, 'NEGATIVE ', 0, 'ZERO', NULL ) ||
           decode( sign( abs(num) ), +1, to_char( to_date( abs(num),'J'),'JSP') ) || '/=' AS new_value
    FROM cte
    

    EDIT 2:

    Adding limited support for float:

    SqlFiddleDemo3

    WITH cte AS
    (
      SELECT 10 AS num       FROM dual
      UNION ALL SELECT -500  FROM dual
      UNION ALL SELECT 0     FROM dual
      UNION ALL SELECT 10.3  FROM dual
      UNION ALL SELECT -10.7 FROM dual
    )
    SELECT 
      num AS old_value,
      decode( sign( num ), -1, 'NEGATIVE ', 0, 'ZERO', NULL )
      || decode( sign( abs(num) ), +1, to_char( to_date( abs(TRUNC(num)),'J'),'JSP') )
      ||
      CASE
         WHEN INSTR (num, '.') > 0
         THEN  ' POINT ' || TO_CHAR (TO_DATE (TO_NUMBER (SUBSTR(num, INSTR (num, '.') + 1)),'J'),'JSP')
         ELSE NULL
      END AS new_value
    FROM cte
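    The expression is doing three jobs: a sign word, the spelled integer part, and the digits after the point. A pure-Python sketch of that decomposition (the spelling itself is left to TO_CHAR; note that TO_DATE(n,'J') only accepts 1 through 5373484):

```python
def spell_parts(num_text):
    """Decompose a number literal the way the Oracle expression does:
    sign word, integer part (spelled via TO_DATE(n,'J') there), and the
    digits after the point (spelled separately as 'POINT ...')."""
    sign = "NEGATIVE " if num_text.startswith("-") else ""
    body = num_text.lstrip("-")
    int_part, _, frac_part = body.partition(".")
    return sign, int(int_part), frac_part

print(spell_parts("-10.7"))   # ('NEGATIVE ', 10, '7')
```

    Each piece then maps onto one of the decode/CASE branches in the query above.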
    
    qid & accept id: (32983858, 32983924) query: Count Two Tables on shared date in Postgresql soup:


    Just use a CTE; you just have to be careful if you don't have data for every day. In that case you would need a date table to get 0 when there are no sales.

    Also, try not to use reserved words like date as field names.

    with countA as (
         SELECT date, count(*) as CountA
         from tableA
         group by date
    ),
    countB as (
         SELECT date, count(*) as CountB
         from tableB
         group by date
    )
    SELECT A.date, A.CountA, B.CountB
    FROM CountA  A
    INNER JOIN  CountB B
       ON A.date = B.date
    

    With a table AllDates to solve day without sales

    SELECT T.date, 
           CASE 
              WHEN A.CountA IS NULL THEN 0
              ELSE A.CountA
           END as CountA,
           CASE 
              WHEN B.CountB IS NULL THEN 0
              ELSE B.CountB
           END as CountB
    
    FROM AllDates T 
    LEFT JOIN CountA  A
           ON T.date = A.date
    LEFT JOIN CountB B
           ON T.date = B.date
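    The CASE WHEN ... IS NULL THEN 0 pairs can also be written more tersely with COALESCE; a small SQLite sketch with invented data (tableA, tableB, and an alldates calendar table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tableA (d TEXT);
CREATE TABLE tableB (d TEXT);
INSERT INTO tableA VALUES ('2015-10-01'), ('2015-10-01'), ('2015-10-03');
INSERT INTO tableB VALUES ('2015-10-01');
CREATE TABLE alldates (d TEXT);
INSERT INTO alldates VALUES ('2015-10-01'), ('2015-10-02'), ('2015-10-03');
""")

rows = con.execute("""
WITH counta AS (SELECT d, count(*) AS n FROM tableA GROUP BY d),
     countb AS (SELECT d, count(*) AS n FROM tableB GROUP BY d)
SELECT t.d, COALESCE(a.n, 0), COALESCE(b.n, 0)
FROM alldates t
LEFT JOIN counta a ON t.d = a.d
LEFT JOIN countb b ON t.d = b.d
ORDER BY t.d
""").fetchall()
print(rows)   # day 2 has no sales in either table, yet still shows up as 0/0
```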
    
    qid & accept id: (32985768, 32986038) query: how to increase cache hit ratio in Oracle database? soup:


    Source

    Many DBAs do their best to get a 99% or better buffer cache hit ratio, but quickly discover that the performance of their database isn't improving as the hit ratio gets better.

    Here is a query to get your database's current hit ratio:

    SQL> -- Get initial Buffer Hit Ratio reading...
    SQL> SELECT ROUND((1-(phy.value / (cur.value + con.value)))*100,2) "Cache Hit Ratio"
      2    FROM v$sysstat cur, v$sysstat con, v$sysstat phy
      3   WHERE cur.name = 'db block gets'
      4     AND con.name = 'consistent gets'
      5     AND phy.name = 'physical reads'
      6  /
    
    Cache Hit Ratio
    ---------------
             90.75
    

    However, to show how meaningless this number is, let's artificially increase it:

    SQL> -- Let's artificially increase the buffer hit ratio...
    SQL> DECLARE
      2    v_dummy dual.dummy%TYPE;
      3  BEGIN
      4    FOR I IN 1..10000000 LOOP
      5      SELECT dummy INTO v_dummy FROM dual;
      6    END LOOP;
      7  END;
      8  /
    
    PL/SQL procedure successfully completed.
    

    Let's see what happened:

    SQL> -- Let's measure it again...
    SQL> SELECT ROUND((1-(phy.value / (cur.value + con.value)))*100,2) "Cache Hit Ratio"
      2    FROM v$sysstat cur, v$sysstat con, v$sysstat phy
      3   WHERE cur.name = 'db block gets'
      4     AND con.name = 'consistent gets'
      5     AND phy.name = 'physical reads'
      6  /
    
    Cache Hit Ratio
    ---------------
              99.94
    

    Conclusion: Don't even bother trying to tune the Buffer Hit Ratio!

    There are better ways to tune now. The Oracle Wait Interface (OWI) provides exact details. No need to rely on fuzzy meaningless counters anymore.
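    The arithmetic behind the counter is easy to reproduce; a small Python sketch (counter values invented, chosen to land on the 90.75 figure above) shows how a flood of cheap logical reads inflates the ratio without avoiding a single physical read:

```python
def hit_ratio(physical_reads, db_block_gets, consistent_gets):
    # Same formula as the query: 1 - physical / (logical reads), as a percentage
    return round((1 - physical_reads / (db_block_gets + consistent_gets)) * 100, 2)

before = hit_ratio(1_000, 2_000, 8_812)   # modest logical-read volume

# Ten million extra cheap logical reads (the SELECT-from-dual loop) inflate
# the ratio, yet physical I/O is exactly the same as before:
after = hit_ratio(1_000, 2_000, 8_812 + 10_000_000)
print(before, after)   # 90.75 99.99
```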

    qid & accept id: (33020243, 33020588) query: SELECT @@ROWCOUNT Oracle equivalent soup:


    I don't know of any exact equivalent in Oracle that you can use in pure SQL.

    An alternative that may work for you, depending on your specific need, is to add a count(*) over () to your select statement to give you the total number of rows. It would at least save you from having to execute the query a second time.

    select t.*,
           count(*) over () as num_rows
      from table t
     where ...
    

    Or, if you can't change the original query, then you can wrap it like this:

    select t.*,
           count(*) over () as num_rows
      from (query) t
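    For what it's worth, the same trick works anywhere window functions exist; a tiny sketch with Python's sqlite3 (SQLite 3.25+ for window-function support, toy table invented):

```python
import sqlite3  # window functions require SQLite >= 3.25

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

# Every row carries the total row count, so no second COUNT(*) query is needed:
rows = con.execute("SELECT x, count(*) OVER () AS num_rows FROM t").fetchall()
print(rows)   # three rows, each carrying num_rows = 3
```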
    
    qid & accept id: (33050763, 33052821) query: Mysql : join three tables with cardinality 1:N soup:


    I think you forgot the GROUP BY; please try the following.

    SELECT
        ST.ID_ST, ST.`NAME`, GROUP_CONCAT(DISTINCT SU.`NAME`) AS subjects
    FROM STUDENTS AS ST
    LEFT JOIN QUALIFICATIONS AS QU ON
        ST.ID_ST = QU.ID_ST
    LEFT JOIN SUBJECTS AS SU ON
        SU.ID_SB = QU.ID_SB
    GROUP BY ST.ID_ST
    

    This should only display each student once; the subjects are separated with commas, and each subject appears only once per student (and only if the student actually has the subject).

    ID - Name - Subjects
    1 - Eve - Subj1, Subj3, Subj7
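    That shape can be reproduced with SQLite, whose group_concat works much like MySQL's (sample data invented; note the deliberate duplicate qualification is collapsed by DISTINCT, and a student with no subjects still appears thanks to the LEFT JOIN):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE students (id_st INTEGER, name TEXT);
CREATE TABLE subjects (id_sb INTEGER, name TEXT);
CREATE TABLE qualifications (id_st INTEGER, id_sb INTEGER);
INSERT INTO students VALUES (1, 'Eve'), (2, 'Bob');
INSERT INTO subjects VALUES (1, 'Subj1'), (3, 'Subj3');
INSERT INTO qualifications VALUES (1, 1), (1, 1), (1, 3);  -- duplicate on purpose
""")

rows = con.execute("""
SELECT st.id_st, st.name, group_concat(DISTINCT su.name) AS subjects
FROM students st
LEFT JOIN qualifications qu ON st.id_st = qu.id_st
LEFT JOIN subjects su ON su.id_sb = qu.id_sb
GROUP BY st.id_st
ORDER BY st.id_st
""").fetchall()
print(rows)   # Eve gets 'Subj1' and 'Subj3' once each; Bob gets NULL
```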


    But it wasn't what you asked for; you want each subject to have its own column, right?

    To do that, you could use a subquery in the SELECT to get all subjects and use IF to print yes or no for each subject. You would still need GROUP_CONCAT. But I would solve it in a different way.

    I would use a different language to separate the columns. First you need the student information and the IDs of all the student's subjects, like this:

    SELECT
        ST.ID_ST, ST.`NAME`, GROUP_CONCAT(DISTINCT QU.`ID_SB`) AS subject_ids
    FROM STUDENTS AS ST
    LEFT JOIN QUALIFICATIONS AS QU ON
        ST.ID_ST = QU.ID_ST
    GROUP BY ST.ID_ST
    

    Then get the subjects:

    SELECT * FROM SUBJECTS
    

    The last part is creating the Excel file. The example below is written in PHP:

    $excelRows = array();
    foreach($students as $student){
        $excelRow = array($student->id, $student->name);
        // subject_ids comes back as a comma-separated string, so split it first
        $subjectIds = explode(',', $student->subject_ids);
        foreach($subjects as $subject){
            array_push($excelRow, in_array($subject->id, $subjectIds) ? 'yes' : 'no');
        }
        array_push($excelRows, $excelRow);
    }
    

    So, we just loop the students, inside the student-loop we loop the subjects.


    It isn't tested, but I think most of it should work. If it fails, please show me what you have tried and explain why it doesn't work.

    qid & accept id: (33085816, 33086044) query: Mysql Left join rotate 90° table soup:


    Magical Join! I love it. Must-have feature for the next release of MySQL. :-)

    In the meantime, here's what you do.

    SELECT m.id, 
           m.cust_id,
           a.value   AS author,
           r.value   AS `release`,
           p.value   AS price, 
           m.filename,
           m.hidden
      FROM media m
      LEFT JOIN other_table a ON m.id = a.id AND a.rowname = 'author'
      LEFT JOIN other_table p ON m.id = p.id AND p.rowname = 'price'
      LEFT JOIN other_table r ON m.id = r.id AND r.rowname = 'release'
    

    This use of a LEFT JOIN for each distinct attribute in your other_table (which you might call an attributes table or a metadata table) will allow your query to work even when some of your original rows don't have some of the metadata items.

    Then if you need to filter or order on some of these items, you wrap this query in an outer query. For example.

     SELECT *
       FROM (
         SELECT m.id, 
                m.cust_id,
                a.value   AS author,
                r.value   AS `release`,
                p.value   AS price, 
                m.filename,
                m.hidden
           FROM media m
           LEFT JOIN other_table a ON m.id = a.id AND a.rowname = 'author'
           LEFT JOIN other_table p ON m.id = p.id AND p.rowname = 'price'
           LEFT JOIN other_table r ON m.id = r.id AND r.rowname = 'release'
        ) sub
     WHERE `release` >= '2010-01-01'
     ORDER BY author
    

    It happens, for what it's worth, that WordPress uses this strategy to store arbitrary information in its wp_postmeta table.
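    Here is the pivot in miniature with SQLite (table contents invented); note that the missing 'release' row for id 1, and all three missing rows for id 2, simply come back as NULL thanks to the LEFT JOINs:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE media (id INTEGER, filename TEXT);
CREATE TABLE other_table (id INTEGER, rowname TEXT, value TEXT);
INSERT INTO media VALUES (1, 'a.mp3'), (2, 'b.mp3');
INSERT INTO other_table VALUES
  (1, 'author', 'Alice'), (1, 'price', '9.99');   -- no 'release' row for id 1
""")

# One LEFT JOIN per attribute turns the key/value rows into columns:
rows = con.execute("""
SELECT m.id, m.filename, a.value AS author, p.value AS price, r.value AS rel
FROM media m
LEFT JOIN other_table a ON m.id = a.id AND a.rowname = 'author'
LEFT JOIN other_table p ON m.id = p.id AND p.rowname = 'price'
LEFT JOIN other_table r ON m.id = r.id AND r.rowname = 'release'
ORDER BY m.id
""").fetchall()
print(rows)
# [(1, 'a.mp3', 'Alice', '9.99', None), (2, 'b.mp3', None, None, None)]
```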

    qid & accept id: (33087355, 33087581) query: SQL Inner join on one field when second is null and on second when first is null soup:


    It is a bit too big a statement to put in a comment, so I will post it as an answer. If my understanding of the problem is correct, then it will be like:

    select * 
    from sizeconditionstable t1
    join specalloytable t2
    on (t1.c4 is not null and t2.c4 is not null and t1.c4 = t2.c4) or 
       (t1.c5 is not null and t2.c5 is not null and t1.c5 = t2.c5)
    

    Edit:

    select * 
        from sizeconditionstable t1
        join specalloytable t2
        on (t1.utc = t2.utc and t1.colnum = t2.colnum) and
           ((t1.c4 = t2.c4) or (t1.c4 is null and t2.c4 is null)) and
           ((t1.c5 = t2.c5) or (t1.c5 is null and t2.c5 is null))
    

    This is the version which will join always on utc and colnum and also on c4 and c5 if they are filled in both tables.
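    A minimal SQLite sketch (two toy tables) showing why the extra OR ... IS NULL branch matters: plain equality never matches NULL to NULL, so that pair of rows is lost without it:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1 (utc TEXT, c4 TEXT);
CREATE TABLE t2 (utc TEXT, c4 TEXT);
INSERT INTO t1 VALUES ('u1', 'x'), ('u2', NULL);
INSERT INTO t2 VALUES ('u1', 'x'), ('u2', NULL);
""")

# Plain equality drops the NULL pair, because NULL = NULL is not true:
plain = con.execute("""
SELECT count(*) FROM t1 JOIN t2
ON t1.utc = t2.utc AND t1.c4 = t2.c4
""").fetchone()[0]                      # 1 row

# The null-aware predicate from the answer keeps it:
nullsafe = con.execute("""
SELECT count(*) FROM t1 JOIN t2
ON t1.utc = t2.utc
AND (t1.c4 = t2.c4 OR (t1.c4 IS NULL AND t2.c4 IS NULL))
""").fetchone()[0]                      # 2 rows
print(plain, nullsafe)
```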

    qid & accept id: (33108437, 33108735) query: SQL Server 2014 Join Doubling Line Items soup:
    SELECT  [Order].[OrderNumber]
            ,CASE   WHEN [ShopifyOrder].[PaymentStatusCode] = '2' THEN 'Paid'
                    WHEN [ShopifyOrder].[PaymentStatusCode] = '4' THEN 'Refunded'
                    WHEN [ShopifyOrder].[PaymentStatusCode] = '5' THEN 'Voided'
                    WHEN [ShopifyOrder].[PaymentStatusCode] = '6' THEN 'Partially Refunded'
                    END AS 'PaymentStatus'
            ,[Store].[StoreName] as 'MarketplaceNames'
            ,[OrderItem].[SKU]
            ,[LookupList].[MainSKU]
            ,[ShippingCharge].[Description] as shippingDescription
            ,[ShippingCharge].[Amount] as shippingAmount
            ,[DiscountCharge].[Description] as discountDescription
            ,[DiscountCharge].[Amount] as discountAmount
            ,[LookupList].[Classification] as 'Classification'
            ,[LookupList].[Cost]
            ,([OrderItem].[Quantity]* [OrderItem].[UnitPrice]) AS 'Sales'
            ,(([OrderItem].[Quantity] * [LookupList].[Quantity]) * [LookupList].[Cost]) AS 'Total Cost'
            ,[OrderItem].[Quantity] * [LookupList].[Quantity] AS 'Total Qty'
    FROM [SHIPSERVER].[dbo].[Order]
    JOIN [SHIPSERVER].[dbo].[ShopifyOrder]
    ON [Order].[OrderID]=[ShopifyOrder].[OrderID]
    JOIN [SHIPSERVER].[dbo].[OrderItem]
    ON [OrderItem].[OrderID]=[Order].[OrderID]
    JOIN [SHIPSERVER].[dbo].[Store]
    ON [Order].[StoreID]=[Store].[StoreID]
    LEFT JOIN [SHIPSERVER].[dbo].[LookupList]
    ON [OrderItem].[SKU]=[LookupList].[SKU]
    LEFT JOIN [SHIPSERVER].[dbo].[OrderCharge] [ShippingCharge]
    ON [Order].[OrderID]=[ShippingCharge].[OrderID] AND [ShippingCharge].[Type] = 'SHIPPING'
    LEFT JOIN [SHIPSERVER].[dbo].[OrderCharge] [DiscountCharge]
    ON [Order].[OrderID]=[DiscountCharge].[OrderID] AND [DiscountCharge].[Type] = 'DISCOUNT'
    WHERE ([Store].[StoreName]= 'Shopify')
    AND ([Order].[OrderDate] BETWEEN '2015-09-01 00:00:00.000' AND '2015-09-30 23:59:59.999')
    AND ([Order].[IsManual] = '0')
    

    The differences are :

            ,[ShippingCharge].[Description] as shippingDescription
            ,[ShippingCharge].[Amount] as shippingAmount
            ,[DiscountCharge].[Description] as discountDescription
            ,[DiscountCharge].[Amount] as discountAmount
    

    and

    LEFT JOIN [SHIPSERVER].[dbo].[OrderCharge] [ShippingCharge]
    ON [Order].[OrderID]=[ShippingCharge].[OrderID] AND [ShippingCharge].[Type] = 'SHIPPING'
    LEFT JOIN [SHIPSERVER].[dbo].[OrderCharge] [DiscountCharge]
    ON [Order].[OrderID]=[DiscountCharge].[OrderID] AND [DiscountCharge].[Type] = 'DISCOUNT'
    

    Basically, what I did is I left joined on OrderCharge twice, once for Discount and once for Shipping, with a different alias each time. This means that you're potentially linked to a discount row and potentially linked to a shipping row, and from there getting the data is incredibly easy.

    As @thab pointed out in comments though, there are glaring issues with this. First of all, having more than one Shipping or Discount entry will duplicate rows, at which point you would have to SUM the [Amount] (and probably concatenate the descriptions, e.g. with FOR XML PATH). This also means that the query must be altered whenever a new Type of OrderCharge appears.

    The ideal solution would be using PIVOT, but I haven't dabbled with that yet, so I can't help you with that one. I do believe that pivot queries run slower though (well, at least dynamic ones do), so as long as your problem doesn't change you should be fine.
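    To see the duplication concretely, here is a minimal SQLite sketch (invented tables) of one order with two SHIPPING charges, alongside the pre-aggregated variant that keeps one row per order:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (id INTEGER);
CREATE TABLE ordercharge (orderid INTEGER, type TEXT, amount REAL);
INSERT INTO orders VALUES (1);
INSERT INTO ordercharge VALUES (1, 'SHIPPING', 5.0), (1, 'SHIPPING', 2.5);
""")

# Two SHIPPING rows -> the plain LEFT JOIN duplicates the order row:
dup = con.execute("""
SELECT o.id, s.amount FROM orders o
LEFT JOIN ordercharge s ON o.id = s.orderid AND s.type = 'SHIPPING'
""").fetchall()                                    # 2 rows for one order

# Pre-aggregating the charges in a derived table keeps one row per order:
agg = con.execute("""
SELECT o.id, COALESCE(s.total, 0) FROM orders o
LEFT JOIN (SELECT orderid, sum(amount) AS total
           FROM ordercharge WHERE type = 'SHIPPING'
           GROUP BY orderid) s ON o.id = s.orderid
""").fetchall()
print(dup, agg)   # agg collapses the two charges to a single 7.5 total
```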

    qid & accept id: (33118186, 33118387) query: Optimizing window function in PostgreSQL to use index soup:


    To match the index you created:

    CREATE INDEX ON foo(id, date)
    

    you would have to make that:

    ROW_NUMBER() OVER (PARTITION BY id ORDER BY date DESC NULLS LAST) 

    which is the perfect reverse order of ASC.

    That aside, you could just run:

    SELECT DISTINCT ON (id)
           id, date
    FROM   foo
    ORDER  BY id, date DESC NULLS LAST;
    

    But that's probably not what you wanted to ask. Either way, I would make the index:

    CREATE INDEX ON foo(id, date DESC NULLS LAST)
    

    so that max(date) is the first index entry per id.

    qid & accept id: (33135937, 33138799) query: Retrieve records against most recent state/attribute value soup:


    The best query depends on various details: selectivity of the query predicate, cardinalities, data distribution. If state = 'A' is a selective condition (few rows qualify), this query should be substantially faster:

    SELECT c.user_id, c.state
    FROM   customer_properties c
    LEFT   JOIN customer_properties c1 ON c1.user_id = c.user_id
                                      AND c1.created_at > c.created_at
    WHERE  c.state = 'A'
    AND    c1.user_id IS NULL;
    

    Provided there is an index on (state) (or even (state, user_id, created_at)) and another one on (user_id, created_at).

    There are various ways to make sure a later version of the row does not exist:

    If 'A' is a common value in state, this more generic query will be faster:

    SELECT user_id, state
    FROM (
       SELECT user_id, state
            , row_number() OVER (PARTITION BY user_id ORDER BY created_at DESC) AS rn
       FROM   customer_properties
       ) t
    WHERE  t.rn = 1
    AND    t.state = 'A';
    

    I removed NULLS LAST, assuming that created_at is defined NOT NULL. Also, I don't think Redshift supports it.

    Both queries should work with the limited functionality of Redshift. With modern Postgres, there are better options, such as DISTINCT ON.

    Your original would return all rows per user_id if the latest row matches, so you would have to fold duplicates afterwards: needless work.
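    The anti-join pattern above can be checked end to end. Here is a minimal sketch using Python's sqlite3 as a stand-in engine (the table and column names match the answer; the rows are invented for illustration):

```python
# Hypothetical data for the "latest row per user" anti-join: user 1's latest
# state is B (so its A row must be excluded); user 2's latest state is A.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer_properties (user_id INT, state TEXT, created_at TEXT);
INSERT INTO customer_properties VALUES
  (1, 'A', '2015-01-01'),
  (1, 'B', '2015-02-01'),
  (2, 'A', '2015-03-01');
""")

rows = conn.execute("""
SELECT c.user_id, c.state
FROM   customer_properties c
LEFT   JOIN customer_properties c1 ON c1.user_id = c.user_id
                                  AND c1.created_at > c.created_at
WHERE  c.state = 'A'
AND    c1.user_id IS NULL
""").fetchall()

print(rows)  # only user 2 qualifies: [(2, 'A')]
```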

    qid & accept id: (33137311, 33487726) query: How to model several tournaments / brackets types into a SQL database? soup:

    soup wrap:

    You could create tables to hold tournament types, league types, and playoff types, and have a schedule table showing an event name along with its tournament type; then use that relationship to retrieve information about the tournament. Note, this is not MySQL-specific; it is more generic SQL:

    CREATE TABLE tournTypes (
    ID int autoincrement primary key,
    leagueId int constraint foreign key references leagueTypes.ID,
    playoffId int constraint foreign key references playoffTypes.ID
    --...other attributes would necessitate more tables
    )
    
    CREATE TABLE leagueTypes(
    ID int autoincrement primary key,
    noOfTeams int,
    noOfDivisions int,
    interDivPlay bit -- e.g. a flag indicating if teams in different divisions would play
    )
    
    CREATE TABLE playoffTypes(
    ID int autoincrement primary key,
    noOfTeams int,
    isDoubleElim bit -- e.g. flag if it is double elimination
    )
    
    CREATE TABLE Schedule(
    ID int autoincrement primary key,
    Name text,
    startDate datetime,
    endDate datetime,
    tournId int constraint foreign key references tournTypes.ID
    )
    

    Populating the tables...

    INSERT INTO tournTypes VALUES
    (1,2),
    (1,3),
    (2,3),
    (3,1)
    
    INSERT INTO leagueTypes VALUES
    (16,2,0), -- 16 teams, 2 divisions, teams only play within own division
    (8,1,0),
    (28,4,1)
    
    INSERT INTO playoffTypes VALUES
    (8,0), -- 8 teams, single elimination
    (4,0),
    (8,1)
    
    INSERT INTO Schedule VALUES
    ('Champions league','2015-12-10','2016-02-10',1),
    ('Rec league','2015-11-30','2016-03-04',2)
    

    Getting info on a tournament...

    SELECT Name
    ,startDate
    ,endDate
    ,l.noOfTeams as LeagueSize
    ,p.noOfTeams as PlayoffTeams
    ,case p.isDoubleElim when 0 then 'Single' when 1 then 'Double' end as Elimination
    FROM Schedule s
    INNER JOIN tournTypes t
    ON s.tournId = t.ID
    INNER JOIN leagueTypes l
    ON t.leagueId = l.ID
    INNER JOIN playoffTypes p
    ON t.playoffId = p.ID
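    Since the sketch above is generic pseudo-SQL, here is one runnable translation into SQLite via Python's sqlite3 (SQLite spells auto-increment as INTEGER PRIMARY KEY and takes inline REFERENCES clauses; the data values are the ones from the answer):

```python
# Translate the generic schema to SQLite and run the final join for one event.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE leagueTypes (
  ID INTEGER PRIMARY KEY,
  noOfTeams INT, noOfDivisions INT, interDivPlay INT);
CREATE TABLE playoffTypes (
  ID INTEGER PRIMARY KEY,
  noOfTeams INT, isDoubleElim INT);
CREATE TABLE tournTypes (
  ID INTEGER PRIMARY KEY,
  leagueId INT REFERENCES leagueTypes(ID),
  playoffId INT REFERENCES playoffTypes(ID));
CREATE TABLE Schedule (
  ID INTEGER PRIMARY KEY,
  Name TEXT, startDate TEXT, endDate TEXT,
  tournId INT REFERENCES tournTypes(ID));

INSERT INTO leagueTypes (noOfTeams, noOfDivisions, interDivPlay)
VALUES (16,2,0), (8,1,0), (28,4,1);
INSERT INTO playoffTypes (noOfTeams, isDoubleElim)
VALUES (8,0), (4,0), (8,1);
INSERT INTO tournTypes (leagueId, playoffId)
VALUES (1,2), (1,3), (2,3), (3,1);
INSERT INTO Schedule (Name, startDate, endDate, tournId)
VALUES ('Champions league','2015-12-10','2016-02-10',1);
""")

row = conn.execute("""
SELECT Name, l.noOfTeams, p.noOfTeams,
       CASE p.isDoubleElim WHEN 0 THEN 'Single' ELSE 'Double' END
FROM Schedule s
JOIN tournTypes   t ON s.tournId   = t.ID
JOIN leagueTypes  l ON t.leagueId  = l.ID
JOIN playoffTypes p ON t.playoffId = p.ID
""").fetchone()

print(row)  # ('Champions league', 16, 4, 'Single')
```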
    
    qid & accept id: (33149080, 33150154) query: SQL Server Query to Count Number of Changing Values in a Column Sequentially soup:

    soup wrap:

    Here's one way to do it using window functions:

    SELECT tenant, area, [date], sales,
           DENSE_RANK() OVER (ORDER BY grpOrder) AS counter
    FROM (
      SELECT tenant, area, date, sales,       
             MIN([date]) OVER (PARTITION BY area, grp) AS grpOrder
      FROM (
        SELECT tenant, area, [date], sales,           
               ROW_NUMBER() OVER (ORDER BY date) -
               ROW_NUMBER() OVER (PARTITION BY area ORDER BY [date]) AS grp
        FROM tenant ) AS t ) AS s
    

    The inner query identifies islands of consecutive area values. See grp value in below partial output from this sub-query:

    area date       grp
    --------------------
    18   2015-01-01  0
    18   2015-01-02  0
    18   2015-01-05  2
    18   2015-01-06  2
    20   2015-01-03  2
    20   2015-01-04  2
    

    Using the window version of MIN, we can calculate the group order: grpOrder holds the minimum date per group.

    Using DENSE_RANK() in the outer query we can now easily calculate counter values: first group gets a value of 1, next group a value of 2, etc.
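    The same gaps-and-islands query can be exercised outside SQL Server; SQLite (3.25+) supports the same window functions and even the [date] bracket quoting. A sketch via Python's sqlite3, keeping only the area/date columns and using rows that reproduce the partial output above:

```python
# Reproduce the island numbering: areas 18, 20, 18 over consecutive dates
# should get counters 1, 2, 3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tenant (area INT, [date] TEXT);
INSERT INTO tenant VALUES
  (18,'2015-01-01'), (18,'2015-01-02'),
  (20,'2015-01-03'), (20,'2015-01-04'),
  (18,'2015-01-05'), (18,'2015-01-06');
""")

rows = conn.execute("""
SELECT area, [date],
       DENSE_RANK() OVER (ORDER BY grpOrder) AS counter
FROM (
  SELECT area, [date],
         MIN([date]) OVER (PARTITION BY area, grp) AS grpOrder
  FROM (
    SELECT area, [date],
           ROW_NUMBER() OVER (ORDER BY [date]) -
           ROW_NUMBER() OVER (PARTITION BY area ORDER BY [date]) AS grp
    FROM tenant ) AS t ) AS s
ORDER BY [date]
""").fetchall()

for area, d, counter in rows:
    print(area, d, counter)  # counters come out 1,1,2,2,3,3
```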


    qid & accept id: (33156695, 33156797) query: How to add averages w/ sum (case when...) in MySQL? soup:

    soup wrap:

    Your query looks OK, but you don't need the subquery, so a simpler version is:

    SELECT ft.fruit,   
           COUNT(ftl.fruit_attribute) As attributes_shared_lemon,
           SUM(ftl.fruit_attribute IS NULL) As attributes_not_shared_lemon
    FROM fruits ft LEFT JOIN
         fruits ftl
         ON ft.fruit_attribute = ftl.fruit_attribute and ftl.fruit = 'Lemon'
    GROUP BY ft.fruit;
    

    I removed the submissions column, because it is not unique on each row.

    EDIT:

    If you want the average of the submissions columns for these groups, use case:

    SELECT ft.fruit,  
           AVG(CASE WHEN ftl.fruit_attribute IS NOT NULL THEN ft.submissions END) as avg_shared, 
           AVG(CASE WHEN ftl.fruit_attribute IS NULL THEN ft.submissions END) as avg_notshared, 
           COUNT(ftl.fruit_attribute) As attributes_shared_lemon,
           SUM(ftl.fruit_attribute IS NULL) As attributes_not_shared_lemon
    FROM fruits ft LEFT JOIN
         fruits ftl
         ON ft.fruit_attribute = ftl.fruit_attribute and ftl.fruit = 'Lemon'
    GROUP BY ft.fruit;
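    The conditional-average query behaves the same on SQLite, which, like MySQL, evaluates ftl.fruit_attribute IS NULL to 0/1 inside SUM. A sketch with an invented fruits table (Apple shares only 'yellow' with Lemon):

```python
# Conditional aggregation: averages of submissions split by whether the
# attribute is shared with Lemon.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE fruits (fruit TEXT, fruit_attribute TEXT, submissions INT);
INSERT INTO fruits VALUES
  ('Apple','sweet',5), ('Apple','yellow',8),
  ('Lemon','sour',10), ('Lemon','yellow',20);
""")

rows = conn.execute("""
SELECT ft.fruit,
       AVG(CASE WHEN ftl.fruit_attribute IS NOT NULL THEN ft.submissions END) as avg_shared,
       AVG(CASE WHEN ftl.fruit_attribute IS NULL THEN ft.submissions END) as avg_notshared,
       COUNT(ftl.fruit_attribute) As attributes_shared_lemon,
       SUM(ftl.fruit_attribute IS NULL) As attributes_not_shared_lemon
FROM fruits ft LEFT JOIN
     fruits ftl
     ON ft.fruit_attribute = ftl.fruit_attribute and ftl.fruit = 'Lemon'
GROUP BY ft.fruit
ORDER BY ft.fruit
""").fetchall()

print(rows)  # [('Apple', 8.0, 5.0, 1, 1), ('Lemon', 15.0, None, 2, 0)]
```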
    
    qid & accept id: (33234319, 33235283) query: Join two database table zend framework 1.12 soup:

    soup wrap:

    Bearing in mind Zend's joinInner declaration:

    public function joinInner($name, $cond, $cols = self::SQL_WILDCARD, $schema = null)
    

    And being '$this', for example, a Zend_Db_Table_Abstract implementation with adapter set to db1 (with _setAdapter()) and schema to "@@@@@" (this is not really necessary because it'll use it as default):

    $select = $this->select(true)->setIntegrityCheck(false)
                   ->from(array('t1'=>'table1'), array('somefield'))
                   ->joinInner(array('t1b'=>'table1'),
                              't1.someid = t1b.someid',
                               array('t1b.somefield'),
                               '######')
                   ->where('t1.somefield = ?', $queryCrit); 
    

    Please note the fourth parameter of the joinInner method.

    Hope this helps.

    qid & accept id: (33258020, 33258293) query: Escaping special characters when naming a table column without setting define off soup:

    soup wrap:

    You would need to quote the identifier, but this is a really bad idea; every reference to the column everywhere will also have to be quoted and match the case exactly. See the documentation, which advises against using quoted identifiers.

    It's an even worse idea with an ampersand because of its use for substitution variables, as you're seeing. To create the table and then use substitution variables in the same script you would need to turn off defines before the creation, and then turn them back on afterwards:

    set define off
    create table bad_idea("this&that" number);
    set define on
    

    But you still couldn't refer to the table name and a substitution variable in the same statement, unless you set define to something non-standard:

    set define "^"
    insert into bad_idea("this&that") values (^var);
    

    But again everything that ever refers to that column will need to take that into account too, as well as the case and quoting.

    I'd seriously reconsider and make it something like this_and_that, or omit the 'and' part completely if it isn't really adding anything (or your real column name is approaching the length limit).

    If you only need it as a column alias you can do the same thing, and it would be slightly less painful, but still not ideal:

    set define "^"
    select fieldA "this&that" from tableA where fieldB = ^var;
    
    qid & accept id: (33267250, 33267347) query: SQL for finding types that have all ids on some row soup:

    soup wrap:

    In regular SQL, you could write:

    select type
    from mytable 
    group by type
    having count(distinct id) = (select count(distinct id) from mytable);
    

    That will not quite work in MS Access because Access does not support COUNT(DISTINCT).

    But, if we assume no duplicates in mytable, then this variant should work:

    select type
    from mytable
    group by type
    having count(*) = (select count(*)
                       from (select distinct id from mytable) as t
                      );
    

    EDIT:

    If the table could contain duplicates, then you can remove them before the aggregation:

    select type
    from (select distinct type, id from mytable ) as ti
    group by type
    having count(*) = (select count(*)
                       from (select distinct id from mytable) as t
                      );
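    The first (COUNT(DISTINCT)) form is easy to sanity-check on engines that do support it. A sketch using Python's sqlite3 with made-up rows: ids {1, 2} exist overall, and only type 'x' covers both:

```python
# Relational division: keep the types that have every distinct id.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mytable (type TEXT, id INT);
INSERT INTO mytable VALUES ('x',1), ('x',2), ('y',1);
""")

rows = conn.execute("""
select type
from mytable
group by type
having count(distinct id) = (select count(distinct id) from mytable)
""").fetchall()

print(rows)  # [('x',)]
```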
    
    qid & accept id: (33273676, 33274138) query: MySQL find all rows where rows with number of rows for possible values of column is less than n soup:

    soup wrap:

    That's a simple aggregate. You want a row per foo_name in your results, so you GROUP BY foo_name. Then limit your results in HAVING:

    select foo_name
    from my_table
    group by foo_name
    having count(distinct foo_type) < 3;
    

    You can easily change your HAVING clause in order to know which types were found for a foo_name, e.g.:

    select foo_name
    from my_table
    group by foo_name
    having max(case when foo_type = 'A' then 1 else 0 end) = 0 -- A not found
       and max(case when foo_type = 'B' then 1 else 0 end) = 1 -- B found
       and max(case when foo_type = 'C' then 1 else 0 end) = 1 -- C found
    

    EDIT: Here is the same with another HAVING clause which may be easier to understand:

    select foo_name
    from my_table
    group by foo_name
    having group_concat(distinct foo_type order by foo_type) = 'B,C';
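    The MAX(CASE ...) detection trick also runs unchanged on SQLite. A sketch with invented rows, where 'n1' has types B and C but not A, and 'n2' has all three:

```python
# Per-type presence flags via MAX(CASE ...): select names missing type A
# but having B and C.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE my_table (foo_name TEXT, foo_type TEXT);
INSERT INTO my_table VALUES
  ('n1','B'), ('n1','C'),
  ('n2','A'), ('n2','B'), ('n2','C');
""")

rows = conn.execute("""
select foo_name
from my_table
group by foo_name
having max(case when foo_type = 'A' then 1 else 0 end) = 0 -- A not found
   and max(case when foo_type = 'B' then 1 else 0 end) = 1 -- B found
   and max(case when foo_type = 'C' then 1 else 0 end) = 1 -- C found
""").fetchall()

print(rows)  # [('n1',)]
```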
    
    qid & accept id: (33275284, 33275341) query: How to select parent rows where all children statisfies a condition in SQL? soup:

    soup wrap:

    A general solution is to use NOT EXISTS with a reverse condition (<> instead of =):

    SELECT DISTINCT p.ProjectID
    FROM TblProjects p INNER JOIN TblCustomers ct
      ON ct.ProjectID = p.ProjectID
    WHERE NOT EXISTS
      (SELECT 1
       FROM TblCustomers c
       WHERE c.ProjectID = p.ProjectID AND (Number % 100) <> 0)
    



    Alternatively, specific for this use case, you can use a cleaner query:

    SELECT p.ProjectID
    FROM TblProjects p INNER JOIN TblCustomers ct
      ON ct.ProjectID = p.ProjectID
    GROUP BY p.ProjectID
    HAVING MAX(ct.Number % 100) = 0
    



    P.S. if you only need ProjectID, you don't need to join anything at all, just use TblCustomers directly.
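    Following that P.S., the HAVING MAX(Number % 100) = 0 variant can be run against TblCustomers alone. A sketch on Python's sqlite3 with invented rows: every customer number in project 1 is a multiple of 100, while project 2 has one that is not:

```python
# "All children satisfy the condition" via HAVING MAX(...) = 0.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TblCustomers (ProjectID INT, Number INT);
INSERT INTO TblCustomers VALUES (1,100), (1,200), (2,100), (2,150);
""")

rows = conn.execute("""
SELECT ProjectID
FROM TblCustomers
GROUP BY ProjectID
HAVING MAX(Number % 100) = 0
""").fetchall()

print(rows)  # [(1,)]
```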

    qid & accept id: (33279715, 33279906) query: Filtering orders having one item and not having another one in the same time soup:

    soup wrap:

    I prefer to approach these types of questions using group by and having:

    SELECT i.order_id
    FROM items i
    GROUP BY i.order_id
    HAVING SUM(i.item_id = 1) > 0 AND
           SUM(i.item_id = 2) = 0;
    

    Some notes:

    • You don't need to join in orders, because you have order_id in items.
    • Each condition in the having clause is counting the number of items. The first says there is at least one of item 1 and the second that there is no item 2.
    • I removed the where clause. If you were to have one, then it would be WHERE i.item_id IN (1, 2).

    EDIT:

    In any database other than MySQL, you would use this HAVING clause:

    HAVING SUM(CASE WHEN i.item_id = 1 THEN 1 ELSE 0 END) > 0 AND
           SUM(CASE WHEN i.item_id = 2 THEN 1 ELSE 0 END) = 0;
    

    This will work in MySQL as well; I just like the shorter notation.
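    The short SUM(condition) notation also happens to work on SQLite, which evaluates i.item_id = 1 to 0/1 just like MySQL. A sketch with invented orders: 10 has item 1 and not item 2, 20 has both, 30 has only item 2:

```python
# Orders that contain item 1 but never item 2.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (order_id INT, item_id INT);
INSERT INTO items VALUES (10,1), (10,3), (20,1), (20,2), (30,2);
""")

rows = conn.execute("""
SELECT i.order_id
FROM items i
GROUP BY i.order_id
HAVING SUM(i.item_id = 1) > 0 AND
       SUM(i.item_id = 2) = 0
""").fetchall()

print(rows)  # [(10,)]
```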

    qid & accept id: (33346218, 33346237) query: Sum values in MySQL soup:

    soup wrap:

    You need a simple +:

    SELECT id, hodnota1, hodnota2, hodnota1 +  hodnota2 AS spolu
    FROM test;
    

    For automatic calculation you need to use a trigger or a generated column.

    Generated columns (MySQL 5.7+):

    CREATE TABLE test(
      id INT PRIMARY KEY AUTO_INCREMENT,
      hodnota1 INT,
      hodnota2 INT,
      spolu INT AS (hodnota1 + hodnota2)
    );
    

    Another way is to create a view:

    CREATE VIEW vw_test
    AS
    SELECT id, hodnota1, hodnota2, hodnota1 +  hodnota2 AS spolu
    FROM test;
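    The view approach ports directly to other engines. A sketch on Python's sqlite3 with a couple of invented rows, showing that spolu is computed on read:

```python
# A view that derives spolu = hodnota1 + hodnota2 at query time.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test (id INTEGER PRIMARY KEY, hodnota1 INT, hodnota2 INT);
INSERT INTO test (hodnota1, hodnota2) VALUES (1,2), (10,5);

CREATE VIEW vw_test AS
SELECT id, hodnota1, hodnota2, hodnota1 + hodnota2 AS spolu
FROM test;
""")

rows = conn.execute("SELECT id, spolu FROM vw_test ORDER BY id").fetchall()
print(rows)  # [(1, 3), (2, 15)]
```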
    
    qid & accept id: (33367569, 33367652) query: SQL - where (value1,value2) in list of lists soup:

    soup wrap:

    In terms of optimization, it is often best to put the "constant" list in a temporary table and use a join. In many databases, this would look like:

    select t.*
    from table t join
         (select 1 as x, 1 as y union all select 1, 3 union all select 2, 2
         ) list
         on t.x = list.x and t.y = list.y;
    

    Database optimizers often work better on joins than on complicated where clauses.

    Some databases will also support a where clause like this:

    where (x, y) in ((1, 1), (1, 3), (2, 2))
    

    Of course, you can always use the sequence of comparisons suggested by Juergen, which works in any database.
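    The join-against-a-constant-list form is easy to try on SQLite via Python's sqlite3. In this sketch the table is called pts, since the answer's placeholder name "table" is a reserved word; the rows are invented:

```python
# Match (x, y) pairs against an inline list built with UNION ALL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pts (x INT, y INT);
INSERT INTO pts VALUES (1,1), (1,2), (2,2), (3,3);
""")

rows = conn.execute("""
select t.*
from pts t join
     (select 1 as x, 1 as y union all select 1, 3 union all select 2, 2
     ) list
     on t.x = list.x and t.y = list.y
order by t.x, t.y
""").fetchall()

print(rows)  # [(1, 1), (2, 2)]
```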

    qid & accept id: (33372188, 33374251) query: Oracle REGEXP_LIKE match up to a period (decimal point) or find next sequence in numbers such as Dewey Decimal soup:

    soup wrap:

    If the column PARENT_OCS_ID is unreliable, simply ignore it and calculate it correctly from the child key. The rest is your original approach:

     with fix_parent as
     (select OC_ID,
             SUBSTR (EB_OCS.oc_id, 1, INSTR (EB_OCS.oc_id, '.', -1)-1) as PARENT_OC_ID,
             TO_NUMBER (SUBSTR (EB_OCS.oc_id, INSTR (EB_OCS.oc_id, '.', -1)+1)) child_number
      from   TST EB_OCS)
     select   
        PARENT_OC_ID,  max(child_number) +1 next_child_number
     from fix_parent
     where PARENT_OC_ID in ('4.0.1.1','4.0','4')
     group by PARENT_OC_ID
     order by PARENT_OC_ID; 
    


     PARENT_OC_ID NEXT_CHILD_NUMBER
     ------------ -----------------
     4.0                         13 
     4.0.1.1                      4
    

    To get a result for parent '4' add a line

     insert into TST values ('4.0','4');
    
    qid & accept id: (33381751, 33381818) query: ORACLE SQL - Count soup:

    soup wrap:

    Try like this:

    select * from DVD
    inner join MonthlyStatement on DVD.dvdID =MonthlyStatement.dvdID
    where to_char(MonthlyStatement.dateHired,'Mon-YYYY')='Oct-2015'
    order by DVD.dvdID
    

    COUNT

    select DVD.dvdID,DVD.datePurchased,DVD.filmID,Count(MonthlyStatement.dateHired) from DVD
    inner join MonthlyStatement on DVD.dvdID =MonthlyStatement.dvdID
    where to_char(MonthlyStatement.dateHired,'Mon-YYYY')='Oct-2015'
    group by DVD.dvdID,DVD.datePurchased,DVD.filmID
    order by DVD.dvdID
    
    qid & accept id: (33383829, 33396171) query: Sorting on child model's price attribute soup:

    soup wrap:

    To order by the min value of each price range:

    SELECT id
    FROM (
      SELECT p.id, min(s.price) as min_price
      FROM product p
      JOIN sku s on p.id = s.product_id
      GROUP BY p.id
    ) x
    ORDER BY min_price ASC
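    That min-price ordering can be verified on Python's sqlite3 with invented products and prices (product 2's cheapest SKU at 3 beats product 1's at 5):

```python
# Order products by the minimum price across their SKUs.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product (id INTEGER PRIMARY KEY);
CREATE TABLE sku (product_id INT, price INT);
INSERT INTO product (id) VALUES (1), (2);
INSERT INTO sku VALUES (1,5), (1,9), (2,3), (2,20);
""")

rows = conn.execute("""
SELECT id
FROM (
  SELECT p.id, min(s.price) as min_price
  FROM product p
  JOIN sku s on p.id = s.product_id
  GROUP BY p.id
) x
ORDER BY min_price ASC
""").fetchall()

print(rows)  # [(2,), (1,)]
```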
    

    You probably want this

    SELECT productid
    from (
      SELECT p.productid, s.price,
           ROW_NUMBER() OVER (PARTITION BY p.productid ORDER BY s.price ASC) as rn
      from product p
      JOIN sku s on p.id = s.product_id
    ) x
    where rn = 1
    

    and this

    SELECT productid
    from (
      SELECT p.productid, s.price,
           ROW_NUMBER() OVER (PARTITION BY p.productid ORDER BY s.price DESC) as rn
      from product p
      join sku s on p.id = s.product_id
    ) x
    where rn = 1
    

    But as I said, I'm still not sure if you are ordering by price and removing duplicate productids (as this does), or if you want to order by the min and max values of each product's price range (as the "duplicate" does).

    qid & accept id: (33402664, 33403277) query: SQL - Computing overlap between Interests soup:

    soup wrap:

    Using the following to set up test tables

    --drop table Interests  ----------------------------
    CREATE TABLE Interests
     (
       InterestId  char(1)  not null
      ,UserId      int      not null
     )
    
    INSERT Interests values
      ('A',1)
     ,('A',3)
     ,('B',1)
     ,('B',2)
     ,('B',3)
     ,('B',5)
     ,('C',2)
     ,('D',3)
     ,('D',4)
    
    
    --  drop table Groups  ---------------------
    CREATE TABLE Groups
     (
       GroupId  int  not null
      ,UserId   int  not null
     )
    
    INSERT Groups values
      (-1, 1)
     ,(-1, 2)
    
    
    SELECT * from Interests
    SELECT * from Groups
    

    The following query would appear to do what you want:

    DECLARE @GroupId int
    
    SET @GroupId = -1
    
    ;WITH cteGroupInterests (InterestId)
     as (--  List of the interests referenced by the target group
         select distinct InterestId
          from Groups gr
           inner join Interests nt
            on nt.UserId = gr.UserId
          where gr.GroupId = @GroupId)
    --  Aggregate interests for each user
    SELECT
       UserId
      ,count(OwnInterestId)     OwnInterests
      ,count(SharedInterestId)  SharedInterests
     from (--  Subquery lists all interests for each user
           select
              nt.UserId
             ,nt.InterestId   OwnInterestId
             ,cte.InterestId  SharedInterestId
            from Interests nt
             left outer join cteGroupInterests cte
              on cte.InterestId = nt.InterestId
            where not exists (--  Correlated subquery: is "this" user in the target group?
                              select 1
                               from Groups gr
                               where gr.GroupId = @GroupId
                                and gr.UserId = nt.UserId)) xx
     group by UserId
     having count(SharedInterestId) > 0
    

    It appears to work, but I'd want to do more elaborate tests, and I've no idea how well it'd work against millions of rows. Key points are:

    • The CTE acts as a named subquery referenced by the later query; materializing it as an actual temp table might be a performance boost
    • Correlated subqueries can be tricky, but indexes and not exists should make this pretty quick
    • I was lazy and left out all the underscores, sorry
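    The query's shape (CTE of group interests, left join, correlated NOT EXISTS) can be sanity-checked without a SQL Server instance; the sketch below runs the same logic under SQLite via Python's sqlite3, reusing the answer's test data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Interests (InterestId TEXT NOT NULL, UserId INT NOT NULL);
INSERT INTO Interests VALUES
  ('A',1),('A',3),('B',1),('B',2),('B',3),('B',5),('C',2),('D',3),('D',4);
CREATE TABLE Groups (GroupId INT NOT NULL, UserId INT NOT NULL);
INSERT INTO Groups VALUES (-1,1),(-1,2);
""")

group_id = -1
rows = con.execute("""
WITH cteGroupInterests(InterestId) AS (
    -- interests referenced by the target group
    SELECT DISTINCT nt.InterestId
    FROM Groups gr
    JOIN Interests nt ON nt.UserId = gr.UserId
    WHERE gr.GroupId = ?
)
SELECT UserId,
       COUNT(OwnInterestId)    AS OwnInterests,
       COUNT(SharedInterestId) AS SharedInterests
FROM (
    SELECT nt.UserId,
           nt.InterestId  AS OwnInterestId,
           cte.InterestId AS SharedInterestId
    FROM Interests nt
    LEFT JOIN cteGroupInterests cte ON cte.InterestId = nt.InterestId
    -- exclude users who are already members of the target group
    WHERE NOT EXISTS (SELECT 1 FROM Groups gr
                      WHERE gr.GroupId = ? AND gr.UserId = nt.UserId)
) xx
GROUP BY UserId
HAVING COUNT(SharedInterestId) > 0
ORDER BY UserId
""", (group_id, group_id)).fetchall()

print(rows)  # non-members who share at least one interest with group -1
```

    With this data, users 3 and 5 come back (user 4's only interest, D, is not held by anyone in the group).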
    qid & accept id: (33428932, 33430200) query: How to find a given value appears in how many tables in mysql soup:


    You can use UNION:

    SELECT COUNT(*) AS numOfDiscounts
    FROM (
       SELECT discount 
       FROM table1
       WHERE discount = 12
    
       UNION ALL
    
       SELECT discount 
       FROM table2
       WHERE discount = 12
    
       UNION ALL
    
       SELECT discount 
       FROM table3
       WHERE discount = 12
       UNION ALL
    
       SELECT discount 
       FROM table4
       WHERE discount = 12
    
       UNION ALL
    
       SELECT discount 
       FROM table5
       WHERE discount = 12) AS t
    

    Note that because of UNION ALL, the above query counts the total number of rows with discount = 12 across the five tables, not the number of tables containing such a row; to count tables, deduplicate per table first (e.g. with UNION and a per-table tag).
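    The row-vs-table distinction is easy to see under SQLite via Python's sqlite3 (table1/table2 and their rows here are invented sample data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (discount INT);
CREATE TABLE table2 (discount INT);
INSERT INTO table1 VALUES (12), (12);  -- two matching rows in table1
INSERT INTO table2 VALUES (7);         -- no match in table2
""")

# UNION ALL keeps duplicates, so COUNT(*) is the total number of matching ROWS
row_count = con.execute("""
SELECT COUNT(*) FROM (
    SELECT discount FROM table1 WHERE discount = 12
    UNION ALL
    SELECT discount FROM table2 WHERE discount = 12
)
""").fetchone()[0]

# To count TABLES instead, collapse each table to at most one tagged row first
table_count = con.execute("""
SELECT COUNT(*) FROM (
    SELECT 't1' FROM table1 WHERE discount = 12
    UNION
    SELECT 't2' FROM table2 WHERE discount = 12
)
""").fetchone()[0]

print(row_count, table_count)
```

    Here row_count is 2 (both rows in table1) while table_count is 1 (only table1 contains a match).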


    Alternatively, you can use:

    SELECT COALESCE((SELECT COUNT(*) FROM table1 WHERE discount = 12),0) + 
           COALESCE((SELECT COUNT(*) FROM table2 WHERE discount = 12),0) + 
           COALESCE((SELECT COUNT(*) FROM table3 WHERE discount = 12),0) + 
           COALESCE((SELECT COUNT(*) FROM table4 WHERE discount = 12),0) + 
           COALESCE((SELECT COUNT(*) FROM table5 WHERE discount = 12),0) AS numOfDiscounts
    


    or:

    SELECT (SELECT COUNT(CASE WHEN discount=12 THEN 1 END) FROM table1) + 
           (SELECT COUNT(CASE WHEN discount=12 THEN 1 END) FROM table2) + 
           (SELECT COUNT(CASE WHEN discount=12 THEN 1 END) FROM table3) + 
           (SELECT COUNT(CASE WHEN discount=12 THEN 1 END) FROM table4) + 
           (SELECT COUNT(CASE WHEN discount=12 THEN 1 END) FROM table5) AS numOfDiscounts
    


    qid & accept id: (33437351, 33437469) query: SQL Server query: Adjust money column soup:


    Yes, you can do this without cursors, using cumulative sums:

    select t.*,
           (case when sum(amount) over (order by entryid) <= @amount
                 then amount
                 when sum(amount) over (order by entryid) < @amount + amount
                 then @amount - (sum(amount) over (order by entryid) - amount)
                 else 0
            end) as distrib
    from table t;
    

    That is, use cumulative sums for the calculation.

    For an update, you can use the same logic:

    with toupdate as (
          select t.*,
                 (case when sum(amount) over (order by entryid) <= @amount
                       then amount
                       when sum(amount) over (order by entryid) < @amount + amount
                       then @amount - (sum(amount) over (order by entryid) - amount)
                       else 0
                  end) as new_distrib
          from table t
         )
    update toupdate
        set distrib = new_distrib;
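    The cumulative-sum distribution can be checked under SQLite 3.25+ (which supports window functions, and is bundled with recent Pythons); the entries table, its amounts, and the 100 budget below are made-up sample data standing in for the question's table and @amount:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE entries (entryid INT, amount INT)")
con.executemany("INSERT INTO entries VALUES (?, ?)",
                [(1, 60), (2, 30), (3, 40)])

amount = 100  # the @amount being distributed across rows in entryid order
rows = con.execute("""
SELECT entryid, amount,
       CASE WHEN SUM(amount) OVER (ORDER BY entryid) <= :amt
            THEN amount                           -- row fully covered
            WHEN SUM(amount) OVER (ORDER BY entryid) < :amt + amount
            THEN :amt - (SUM(amount) OVER (ORDER BY entryid) - amount)
            ELSE 0                                -- budget already exhausted
       END AS distrib
FROM entries
""", {"amt": amount}).fetchall()

print(rows)
```

    Running totals are 60, 90, 130, so the rows receive 60, 30, and the remaining 10 respectively.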
    
    qid & accept id: (33447891, 33448069) query: Flattening nested query in WHERE clause with NOT IN soup:


    Even though you asked to remove the subquery, a not exists subquery might run faster than not in, especially if the not in subquery returns a lot of values:

    SELECT m.id, m.name, m.description
    FROM merchandises m
    WHERE NOT EXISTS (
        SELECT 1
        FROM gifts g
        WHERE g.with_merchandise = m.id
        AND g.from_user = 'some_user_id'
        AND g.to_user = 'some_other_user_id'
    )
    

    This query can take advantage of a composite index on gifts(with_merchandise, from_user, to_user).

    If you'd still rather use a left join, then move your conditions on from_user and to_user from the where clause to the on clause:

    SELECT m.id, m.name, m.description
    FROM merchandises m
    LEFT JOIN gifts g ON m.id = g.with_merchandise
      AND g.from_user = 'some_user_id' AND g.to_user = 'some_other_user_id' 
    WHERE g.id IS NULL 
    ORDER BY m.id ASC
    LIMIT 20 OFFSET 0
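    That the two anti-join forms agree is easy to verify under SQLite via Python's sqlite3 (the merchandises/gifts rows and the alice/bob user ids are invented sample data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE merchandises (id INT PRIMARY KEY, name TEXT, description TEXT);
CREATE TABLE gifts (id INT PRIMARY KEY, with_merchandise INT,
                    from_user TEXT, to_user TEXT);
INSERT INTO merchandises VALUES (1,'mug',''), (2,'shirt',''), (3,'cap','');
INSERT INTO gifts VALUES (10, 2, 'alice', 'bob');  -- alice gifted bob the shirt
""")

params = ("alice", "bob")

# NOT EXISTS form
not_exists = con.execute("""
SELECT m.id FROM merchandises m
WHERE NOT EXISTS (SELECT 1 FROM gifts g
                  WHERE g.with_merchandise = m.id
                    AND g.from_user = ? AND g.to_user = ?)
ORDER BY m.id
""", params).fetchall()

# LEFT JOIN ... IS NULL form, with the user conditions in the ON clause
left_join = con.execute("""
SELECT m.id FROM merchandises m
LEFT JOIN gifts g ON m.id = g.with_merchandise
                 AND g.from_user = ? AND g.to_user = ?
WHERE g.id IS NULL
ORDER BY m.id
""", params).fetchall()

print(not_exists, left_join)  # both anti-join forms return the same ids
```

    Both queries return merchandise 1 and 3 (the shirt was already gifted between this pair).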
    
    qid & accept id: (33451718, 33451744) query: Column as list with comma between soup:


    The group_concat aggregate function should do the trick:

    SELECT GROUP_CONCAT(name ORDER BY name) AS name
    FROM   banned
    

    EDIT:
    To answer the question in the comment, you could add a separator clause to replace the comma in the result:

    SELECT GROUP_CONCAT(name ORDER BY name SEPARATOR '...') AS name
    FROM   banned
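    Other engines have close cousins of MySQL's GROUP_CONCAT; as an illustration, SQLite's variant takes the separator as a second argument rather than a SEPARATOR clause (and, before SQLite 3.44, accepts no ORDER BY inside the call, so rows are pre-inserted in sorted order here). The banned names are made-up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE banned (name TEXT)")
# insert already sorted, since this sketch relies on scan order
con.executemany("INSERT INTO banned VALUES (?)",
                [("alice",), ("bob",), ("carol",)])

# default separator is a comma
csv = con.execute("SELECT group_concat(name) FROM banned").fetchone()[0]
# custom separator goes in as the second argument
dotted = con.execute("SELECT group_concat(name, '...') FROM banned").fetchone()[0]

print(csv)
print(dotted)
```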
    
    qid & accept id: (33470158, 33470468) query: SQL query to select today and previous day's price soup:


    You can do something like this:

    with ranking as (
      select ticker, price, dt, 
      rank() over (partition by ticker order by dt desc) as rank
      from stocks
    )
    select * from ranking where rank in (1,2);
    

    Example: http://sqlfiddle.com/#!15/e45ea/3

    Results for your example will look like this:

    | ticker | price |                        dt | rank |
    |--------|-------|---------------------------|------|
    |   AAPL |     6 | October, 23 2015 00:00:00 |    1 |
    |   AAPL |     5 | October, 22 2015 00:00:00 |    2 |
    |   AXP  |     5 | October, 23 2015 00:00:00 |    1 |
    |   AXP  |     3 | October, 22 2015 00:00:00 |    2 |
    

    If your table is large and you run into performance issues, add a where clause to restrict the data to the last 30 days or so.
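    The ranking CTE runs as-is under SQLite 3.25+ (bundled with recent Pythons) via sqlite3; the alias is renamed to rnk to sidestep any keyword ambiguity, and the rows mirror the answer's example (plus one older AAPL row to show rank 3 being filtered out):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE stocks (ticker TEXT, price INT, dt TEXT)")
con.executemany("INSERT INTO stocks VALUES (?, ?, ?)", [
    ("AAPL", 4, "2015-10-21"), ("AAPL", 5, "2015-10-22"), ("AAPL", 6, "2015-10-23"),
    ("AXP",  3, "2015-10-22"), ("AXP",  5, "2015-10-23"),
])

rows = con.execute("""
WITH ranking AS (
  SELECT ticker, price, dt,
         RANK() OVER (PARTITION BY ticker ORDER BY dt DESC) AS rnk
  FROM stocks
)
SELECT ticker, price, dt, rnk FROM ranking
WHERE rnk IN (1, 2)           -- today's and the previous day's price
ORDER BY ticker, rnk
""").fetchall()

print(rows)
```

    Each ticker contributes its two most recent rows; the 2015-10-21 AAPL row (rank 3) is dropped.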

    qid & accept id: (33475700, 33475978) query: How to create a table dynamically with a field of main table as its name in Mysql? soup:


    Whenever you insert a record into your events table with a value like 'abc126' as the eventid,

    simply execute a query like this:

    CREATE TABLE `abc126` LIKE `events`;
    

    This will create a new table named 'abc126' with structure/attributes similar to your events table. If you want the attributes to be similar to some other table, change events to that table name:

    CREATE TABLE `abc126` LIKE `tablename`;
    

    If you want to further customize the new table (for example, keep only some of the attributes), check the MySQL CREATE TABLE options.
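    The same "table named after a column value" idea can be sketched in Python against SQLite; note that SQLite has no CREATE TABLE ... LIKE, so the nearest stand-in copies only the column layout with CREATE TABLE ... AS SELECT ... WHERE 0 (constraints and indexes are not copied), and the table name is a quoted identifier since identifiers cannot be bound as parameters:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (eventid TEXT PRIMARY KEY, title TEXT)")
con.execute("INSERT INTO events VALUES ('abc126', 'launch party')")

# take the eventid of the most recently inserted record
event_id = con.execute(
    "SELECT eventid FROM events ORDER BY rowid DESC LIMIT 1").fetchone()[0]

# quote the dynamic name by hand (double any embedded double quotes)
safe_name = '"' + event_id.replace('"', '""') + '"'
con.execute(f"CREATE TABLE {safe_name} AS SELECT * FROM events WHERE 0")

tables = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
print(tables)
```

    Afterward, both events and the freshly created abc126 show up in the catalog.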

    qid & accept id: (33483212, 33483245) query: How can I order hierarchy trees by branch in a select statement returning all hierarchy levels? soup:


    The following solution orders siblings by id. In your comments, you've mentioned wanting to order siblings by (filter) value. Just replace the relevant expression to achieve this.

    Use recursive SQL, Oracle syntax:

    SELECT *
    FROM t_filters
    START WITH parent IS NULL
    CONNECT BY parent = PRIOR id
    ORDER SIBLINGS BY id
    

    Alternatively, SQL standard syntax (the standard and some databases would require a RECURSIVE keyword, but Oracle doesn't allow it). A bit more tedious, but more extensible:

    WITH /* RECURSIVE */ r (id, parent, rank, value, path) AS (
      SELECT id, parent, rank, value, '' || id
      FROM t_filters
      WHERE parent IS NULL
    
      UNION ALL
    
      SELECT f.id, f.parent, f.rank, f.value, r.path || '/' || f.id
      FROM r
      JOIN t_filters f ON r.id = f.parent
    )
    SELECT *
    FROM r
    ORDER BY path
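    The standard recursive form (Oracle's ORDER SIBLINGS BY is not portable) runs under SQLite via Python's sqlite3; the t_filters rows below are an invented four-node tree:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t_filters (id INT, parent INT, value TEXT)")
con.executemany("INSERT INTO t_filters VALUES (?, ?, ?)", [
    (1, None, "root"), (2, 1, "a"), (3, 1, "b"), (4, 2, "leaf"),
])

rows = con.execute("""
WITH RECURSIVE r(id, parent, value, path) AS (
  -- anchor: the roots
  SELECT id, parent, value, '' || id
  FROM t_filters WHERE parent IS NULL
  UNION ALL
  -- recursive step: append each child's id to its parent's path
  SELECT f.id, f.parent, f.value, r.path || '/' || f.id
  FROM r JOIN t_filters f ON r.id = f.parent
)
SELECT id, path FROM r ORDER BY path
""").fetchall()

print(rows)  # each child sorts directly under its parent's branch
```

    One caveat of string paths: with multi-digit ids, '10' sorts before '2', so for large trees you would zero-pad the ids in the path.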
    
    qid & accept id: (33510248, 33510836) query: sql database - each row in a table with extra data in different tables soup:


    You need to learn about database normalization.

    What you are asking for is to include information for some rows but not every row, so with the current design you will end up with a lot of columns full of NULLs.

    What you should do instead is create additional tables and link them using a foreign key.

    Imagine you have a table Cars with fields

    car_id, color, size, num_wheels
    

    But some cars are recreational vehicles and have additional properties. So instead of adding additional columns to your Cars table, you create another table RV_cars:

    car_id, bathroom_size, num_bed, bol_tv
    

    So if you want to get all the information for one RV car, you do:

    SELECT C.*, R.*
    FROM Cars  C
    join RV_cars R
      ON C.car_id = R.car_id
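    A minimal sketch of this one-to-optional-one split, runnable under SQLite via Python's sqlite3 (the car rows are invented sample data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Cars (car_id INT PRIMARY KEY, color TEXT, size TEXT, num_wheels INT);
CREATE TABLE RV_cars (car_id INT PRIMARY KEY REFERENCES Cars(car_id),
                      bathroom_size TEXT, num_bed INT, bol_tv INT);
INSERT INTO Cars VALUES (1, 'red', 'small', 4), (2, 'white', 'large', 6);
INSERT INTO RV_cars VALUES (2, 'tiny', 2, 1);   -- only car 2 is an RV
""")

rv = con.execute("""
SELECT C.car_id, C.color, R.num_bed
FROM Cars C
JOIN RV_cars R ON C.car_id = R.car_id
""").fetchall()

print(rv)  # only the RV comes back; plain cars carry no NULL-filled RV columns
```

    Car 1 never appears in the join, and its row in Cars carries no wasted RV columns; that is the payoff of the extra table.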
    
    qid & accept id: (33538420, 33538438) query: MYSQL search if a string contains special characters? soup:


    Use regexp

    SELECT *
    FROM `tableName`
    WHERE `columnName` REGEXP '[^a-zA-Z0-9]'
    

    This will select all the rows where the given column contains at least one non-alphanumeric character.

    or

    REGEXP '[^[:alnum:]]'
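    For engines without a built-in REGEXP, you can often supply one; SQLite, for instance, defines `X REGEXP Y` as a call to a user function regexp(pattern, text), which Python's sqlite3 lets you register. The table rows below are invented samples:

```python
import re
import sqlite3

con = sqlite3.connect(":memory:")
# "x REGEXP y" in SQLite invokes regexp(y, x), i.e. regexp(pattern, text)
con.create_function("regexp", 2,
                    lambda pattern, text: re.search(pattern, text) is not None)

con.execute("CREATE TABLE t (name TEXT)")
con.executemany("INSERT INTO t VALUES (?)",
                [("plain123",), ("has space",), ("semi;colon",)])

rows = [r[0] for r in con.execute(
    "SELECT name FROM t WHERE name REGEXP '[^a-zA-Z0-9]' ORDER BY name")]
print(rows)
```

    Only the values containing a non-alphanumeric character come back.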
    
    qid & accept id: (33541995, 33542126) query: how to create calculated pivot in sql soup:


    You can do this:

    SELECT *
    FROM
    (
        SELECT 
            OrderID,
            OrderStatus + CountType AS StatusType,
            DayCount
        FROM CalendarTable     
        UNION ALL
        SELECT 
        OrderID,
        CASE WHEN CountType = 'Working' THEN 'TotalWorking' ELSE 'TotalCalendar' END,
        DayCount
        FROM CalendarTable
    ) AS t
    PIVOT
    (
       MAX(DayCount)
       For StatusType IN(OpenWorking,
                         OpenCalendar,
                         CloseWorking,
                         CloseCalendar,
                         PendingWorking,
                         PendingCalendar,
                         TotalWorking,
                         TotalCalendar)
    ) AS p;
    

    This will give you:

    (screenshot of the pivoted result)


    If you don't want to write down all the statuses manually, then you can do it dynamically:

    DECLARE @cols AS NVARCHAR(MAX);

    DECLARE @query AS NVARCHAR(MAX);
    
    SELECT @cols = STUFF((SELECT distinct ',' +
                            QUOTENAME(StatusType)
                           FROM 
                           (
                                SELECT 
                                  OrderID,
                                  OrderStatus + CountType AS StatusType,
                                  DayCount
                                FROM CalendarTable     
                                UNION ALL
                                SELECT 
                                  OrderID,
                                  CASE WHEN CountType = 'Working' THEN 'TotalWorking' ELSE 'TotalCalendar' END,
                                  DayCount
                                FROM CalendarTable
                            ) AS t
                          FOR XML PATH(''), TYPE
                         ).value('.', 'NVARCHAR(MAX)') 
                            , 1, 1, '');
    
    
    SELECT @query = 'SELECT *
                    FROM
                    (
                        SELECT 
                          OrderID,
                          OrderStatus + CountType AS StatusType,
                          DayCount
                        FROM CalendarTable     
                        UNION ALL
                        SELECT 
                          OrderID,
                          CASE WHEN CountType = ''Working'' THEN ''TotalWorking'' ELSE ''TotalCalendar'' END,
                          DayCount
                        FROM CalendarTable
                    ) AS t
                    PIVOT
                    (
                       MAX(DayCount)
                       For StatusType IN(' + @cols + ')' +
                      ') p';
    
    execute(@query);
    

    Update:

    For column names you can create a new variable @colnames and populate it with the names you want. For the Totals, you can add a WHERE clause to get the total for statuses active and pending only. So your query will be like this:

    DECLARE @cols AS NVARCHAR(MAX);
    DECLARE @colnames AS NVARCHAR(MAX);
    DECLARE @query AS NVARCHAR(MAX);
    
    SELECT @cols = STUFF((SELECT distinct ',' +
                            QUOTENAME(StatusType)
                           FROM 
                           (
                                SELECT 
                                  OrderID,
                                  OrderStatus + CountType AS StatusType,
                                  DayCount
                                FROM CalendarTable     
                                UNION ALL
                                SELECT 
                                  OrderID,
                                  CASE WHEN CountType = 'Working' THEN 'TotalWorking' ELSE 'TotalCalendar' END,
                                  DayCount
                                FROM CalendarTable
                                WHERE OrderStatus IN('Active', 'Pending')
                            ) AS t
                          FOR XML PATH(''), TYPE
                         ).value('.', 'NVARCHAR(MAX)') 
                            , 1, 1, '');
    
    
    SELECT @colnames = STUFF((SELECT distinct ',' +
                            QUOTENAME(StatusType) + ' AS ' + QUOTENAME(StatusTypeName)
                           FROM 
                           (
                               SELECT 
                                    OrderID,
                                    OrderStatus + CountType AS StatusType,
                                    DayCount,
                                    OrderStatus + CASE WHEN CountType = 'Working' THEN  'WorkDays' ELSE 'CalDays' END AS StatusTypeName
                                FROM CalendarTable     
                                UNION ALL
                                SELECT 
                                  OrderID,
                                  CASE WHEN CountType = 'Working' THEN 'TotalWorking' ELSE 'TotalCalendar' END,
                                  DayCount,
                                  CASE WHEN CountType = 'Working' THEN 'TotalWorking' ELSE 'TotalCalendar' END
                                FROM CalendarTable
                                WHERE OrderStatus IN('Active', 'Pending')
                            ) AS t
                          FOR XML PATH(''), TYPE
                         ).value('.', 'NVARCHAR(MAX)') 
                            , 1, 1, '');
    
    SELECT @query = 'SELECT OrderID , ' + @colnames + '
                    FROM
                    (
                        SELECT 
                          OrderID,
                          OrderStatus + CountType AS StatusType,
                          DayCount
                        FROM CalendarTable     
                        UNION ALL
                        SELECT 
                          OrderID,
                          CASE WHEN CountType = ''Working'' THEN ''TotalWorking'' ELSE ''TotalCalendar'' END,
                          DayCount
                        FROM CalendarTable
                        WHERE OrderStatus IN(''Active'', ''Pending'')
                    ) AS t
                    PIVOT
                    (
                       SUM(DayCount)
                       For StatusType IN(' + @cols + ')' +
                      ') p';
    
    execute(@query);
    

    This will give you:

    (screenshot of the pivoted result)


    Update

    If you want to add a where clause to the manual pivot query, you can do this:

    SELECT *
    FROM
    (
        SELECT 
            OrderID,
            OrderStatus + CountType AS StatusType,
            DayCount
        FROM CalendarTable     
        WHERE ...
        UNION ALL
        SELECT 
        OrderID,
        CASE WHEN CountType = 'Working' THEN 'TotalWorking' ELSE 'TotalCalendar' END,
        DayCount
        FROM CalendarTable
        WHERE ...
    ) AS t
    PIVOT
    (
       MAX(DayCount)
       For StatusType IN(OpenWorking,
                         OpenCalendar,
                         CloseWorking,
                         CloseCalendar,
                         PendingWorking,
                         PendingCalendar,
                         TotalWorking,
                         TotalCalendar)
    ) AS p;
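    PIVOT and the dynamic-SQL machinery above are SQL Server specific; as a cross-check of the same UNION ALL + pivot logic on an engine without PIVOT, conditional aggregation produces the equivalent result. This SQLite sketch uses invented CalendarTable rows and only a few of the status columns:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE CalendarTable
               (OrderID INT, OrderStatus TEXT, CountType TEXT, DayCount INT)""")
con.executemany("INSERT INTO CalendarTable VALUES (?, ?, ?, ?)", [
    (1, "Open",    "Working",  5), (1, "Open",    "Calendar", 7),
    (1, "Pending", "Working",  2), (1, "Pending", "Calendar", 3),
])

rows = con.execute("""
SELECT OrderID,
       -- each CASE picks out one pivoted column
       MAX(CASE WHEN StatusType = 'OpenWorking'    THEN DayCount END) AS OpenWorking,
       MAX(CASE WHEN StatusType = 'PendingWorking' THEN DayCount END) AS PendingWorking,
       SUM(CASE WHEN StatusType = 'TotalWorking'   THEN DayCount END) AS TotalWorking,
       SUM(CASE WHEN StatusType = 'TotalCalendar'  THEN DayCount END) AS TotalCalendar
FROM (
    SELECT OrderID, OrderStatus || CountType AS StatusType, DayCount
    FROM CalendarTable
    UNION ALL   -- add the synthetic Total rows, as in the answer
    SELECT OrderID,
           CASE WHEN CountType = 'Working'
                THEN 'TotalWorking' ELSE 'TotalCalendar' END,
           DayCount
    FROM CalendarTable
)
GROUP BY OrderID
""").fetchall()

print(rows)
```

    For this data, order 1 pivots to OpenWorking 5, PendingWorking 2, TotalWorking 7, TotalCalendar 10.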
    
    qid & accept id: (33573890, 33574361) query: SQL Add a filtering parameter without altering the data soup:


    I believe that you are just showing the duplicate data.

    In fact, the invoices table would hold the contract reference along with the related invoice IDs. For instance:

    Project A   Invoice_001 
    Project A   Invoice_002 
    

    With your current SELECT, you'll always get:

    projectA   *titlea*   1111   Jim  
    projectA   *titlea*   2222   James  
    projectB   *titleb*   1111   Jim  
    projectB   *titleb*   3333   Paul  
    

    But if you add the invoice column to your SELECT, you'll probably get:

    projectA   *titlea*   1111   Jim   Invoice_001  
    projectA   *titlea*   1111   Jim   Invoice_002  
    projectA   *titlea*   2222   James Invoice_003  
    projectB   *titleb*   1111   Jim   Invoice_004  
    projectB   *titleb*   3333   Paul  Invoice_005  
    

    So, no more duplicate data!

    I hope this helps.
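    To see the row multiplication concretely, here is a small sketch using Python's built-in sqlite3; the two tables and their rows are invented for illustration:

```python
import sqlite3

# Hypothetical two-table setup: one project row fans out per invoice on join.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE projects (name TEXT, title TEXT);
    CREATE TABLE invoices (project TEXT, invoice TEXT);
    INSERT INTO projects VALUES ('projectA', 'titlea');
    INSERT INTO invoices VALUES ('projectA', 'Invoice_001'),
                                ('projectA', 'Invoice_002');
""")

# Without the invoice column, the join output looks like duplicate rows...
without = conn.execute("""
    SELECT p.name, p.title FROM projects p
    JOIN invoices i ON i.project = p.name
    ORDER BY i.invoice
""").fetchall()

# ...but selecting the invoice column shows each row is actually distinct.
with_inv = conn.execute("""
    SELECT p.name, p.title, i.invoice FROM projects p
    JOIN invoices i ON i.project = p.name
    ORDER BY i.invoice
""").fetchall()
print(without)   # [('projectA', 'titlea'), ('projectA', 'titlea')]
print(with_inv)  # [('projectA', 'titlea', 'Invoice_001'), ('projectA', 'titlea', 'Invoice_002')]
```

    The "duplicates" were never duplicates in the data, only in the projection.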

    qid & accept id: (33593080, 33593135) query: How to get date difference in minutes using Hive soup:

    soup wrap:

    You could use unix_timestamp for dates after 1970:

    SELECT 
      (unix_timestamp('2013-01-01 10:10:10') - unix_timestamp('1970-01-01 00:00:00'))/60 
    
    1. Convert both dates to seconds from 1970-01-01
    2. Subtract them
    3. Divide by 60 to get minutes
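    The three steps above can be sketched in Python with datetime arithmetic; the two timestamps are made-up sample values:

```python
from datetime import datetime

# Same arithmetic as the Hive query: seconds since a common reference,
# subtracted, then divided by 60.
def minutes_between(a: str, b: str) -> float:
    fmt = "%Y-%m-%d %H:%M:%S"
    ta, tb = datetime.strptime(a, fmt), datetime.strptime(b, fmt)
    return (ta - tb).total_seconds() / 60  # subtract seconds, divide by 60

print(minutes_between("2013-01-01 10:40:10", "2013-01-01 10:10:10"))  # 30.0
```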

    SqlFiddleDemoUsingMySQL

    EDIT:

    Adding minutes: convert the date to Unix time, add minutes * 60 seconds, then convert back to a date:

    SELECT from_unixtime(unix_timestamp('2013-01-01 10:10:10') + 10 * 60) AS result
    

    SqlFiddleDemoUsingMySQL2

    qid & accept id: (33605918, 33606305) query: PL/SQL hierarchical ordering soup:

    soup wrap:

    Since you already have the LEVEL stored in the table as ADMIN, you do not need the CONNECT BY clause. You just need to format your output using LPAD.

    For example,

    Setup

    CREATE TABLE t
        (ID NUMBER, NAME VARCHAR2(1), ADMIN NUMBER);
    
    INSERT ALL 
        INTO t (ID, NAME, ADMIN)
             VALUES (200, 'A', 1)
        INTO t (ID, NAME, ADMIN)
             VALUES (300, 'B', 2)
        INTO t (ID, NAME, ADMIN)
             VALUES (400, 'C', 3)
        INTO t (ID, NAME, ADMIN)
             VALUES (500, 'D', 1)
        INTO t (ID, NAME, ADMIN)
             VALUES (600, 'E', 3)
    SELECT * FROM dual;
    

    Query

    SQL> SELECT lpad(' ',2*(ADMIN-1)) || NAME name_hierarchy FROM t ORDER BY ADMIN, NAME;
    
    NAME_HIERARCHY
    --------------------------------------------------------------------------------
    A
    D
      B
        C
        E
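    The same LPAD formatting can be sketched in Python over the sample rows above: sort by (ADMIN, NAME) and indent each name by two spaces per level beyond 1.

```python
# (ID, NAME, ADMIN) rows from the setup above.
rows = [(200, "A", 1), (300, "B", 2), (400, "C", 3),
        (500, "D", 1), (600, "E", 3)]

# ORDER BY ADMIN, NAME; lpad(' ', 2*(ADMIN-1)) becomes a repeated-space prefix.
hierarchy = ["  " * (admin - 1) + name
             for _id, name, admin in sorted(rows, key=lambda r: (r[2], r[1]))]
print(hierarchy)  # ['A', 'D', '  B', '    C', '    E']
```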
    
    qid & accept id: (33608191, 33608740) query: How can I find which column is empty? soup:

    soup wrap:

    Another way to find all the columns that contain only NULL values is to query the [DBA|ALL|USER]_TAB_COLUMNS view and check for NUM_DISTINCT = 0.

    NOTE: the statistics must be gathered and up to date to get an accurate result.

    For example,

    Let's say I have a table "T" with two columns, EMPNO and SAL, where the SAL column is completely NULL.

    SQL> SELECT * FROM LALIT.t;
    
         EMPNO        SAL
    ---------- ----------
          7369
          7499
          7521
          7566
          7654
          7698
          7782
          7788
          7839
          7844
          7876
          7900
          7902
          7934
    
    14 rows selected.
    

    Let's gather statistics to be on the safe side:

    SQL> BEGIN
      2    DBMS_STATS.gather_table_stats(
      3      'LALIT',
      4      'T');
      5  END;
      6  /
    
    PL/SQL procedure successfully completed.
    

    Desired output

    SQL> SELECT column_name,
      2    num_distinct
      3  FROM user_tab_columns
      4  WHERE NUM_DISTINCT = 0
      5  AND table_name     ='T';
    
    COLUMN_NAME NUM_DISTINCT
    ----------- ------------
    SAL                    0
    

    So you get the column that is completely NULL, i.e. num_distinct is 0.


    UPDATE: based on the OP's comment, the requirement is columns having at least one NULL value.

    • You could query the same view for NUM_NULLS <> 0.

    For example, in the standard EMP table in SCOTT schema, let's look for the columns having at least one NULL value.

    SQL> SELECT column_name,
      2         num_nulls
      3  FROM user_tab_columns
      4  WHERE NUM_NULLS <> 0
      5  AND table_name     ='EMP';
    
    COLUMN_NAME  NUM_NULLS
    ----------- ----------
    COMM                11
    MGR                  1
    

    Remember, the statistics must be gathered up to date.

    • Another way in PL/SQL using EXECUTE IMMEDIATE:

    Just reverse the NULL logic in the demonstration about Find all columns having at least a NULL value from all tables in the schema.

    For example,

    FIND_NULL_COL is a simple user-defined function (UDF) that returns 1 for a column that has at least one NULL value:

    SQL> CREATE OR REPLACE FUNCTION FIND_NULL_COL(
      2      TABLE_NAME  VARCHAR2,
      3      COLUMN_NAME VARCHAR2)
      4    RETURN NUMBER
      5  IS
      6    cnt NUMBER;
      7  BEGIN
      8    CNT :=1;
      9    EXECUTE IMMEDIATE 'select count(*) from ' ||TABLE_NAME||' where '
     10                                              ||COLUMN_NAME||' is null'
     11    INTO cnt;
     12    RETURN
     13    CASE
     14    WHEN CNT > 0 THEN
     15      1
     16    ELSE
     17      0
     18    END;
     19  END;
     20  /
    
    Function created.
    

    Call the function in SQL to get the NULL status of all the columns of any table:

    SQL> SELECT c.TABLE_NAME,
      2         c.COLUMN_NAME,
      3         FIND_NULL_COL(c.TABLE_NAME,c.COLUMN_NAME) null_status
      4  FROM all_tab_columns c
      5  WHERE C.OWNER    ='SCOTT'
      6  AND c.TABLE_NAME = 'EMP'
      7  ORDER BY C.OWNER,
      8    C.TABLE_NAME,
      9    C.COLUMN_ID
     10  /
    
    TABLE_NAME COLUMN_NAME NULL_STATUS
    ---------- ----------- -----------
    EMP        EMPNO                 0
    EMP        ENAME                 0
    EMP        JOB                   0
    EMP        MGR                   1
    EMP        HIREDATE              0
    EMP        SAL                   0
    EMP        COMM                  1
    EMP        DEPTNO                0
    
    8 rows selected.
    

    So, NULL_STATUS = 1 marks a column that has at least one NULL value.
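    The FIND_NULL_COL idea translates directly: build a "count where column is null" query per column dynamically. A minimal sketch against a throwaway SQLite table instead of Oracle, with an invented emp-like table:

```python
import sqlite3

# Hypothetical mini table; mgr and comm deliberately contain NULLs.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp (empno INTEGER, mgr INTEGER, comm REAL);
    INSERT INTO emp VALUES (7839, NULL, NULL), (7844, 7698, 0), (7900, 7698, NULL);
""")

# For each column, run dynamic SQL counting NULLs, as EXECUTE IMMEDIATE does.
null_status = {}
for _, col, *_ in conn.execute("PRAGMA table_info(emp)"):
    n = conn.execute(f"SELECT count(*) FROM emp WHERE {col} IS NULL").fetchone()[0]
    null_status[col] = 1 if n > 0 else 0

print(null_status)  # {'empno': 0, 'mgr': 1, 'comm': 1}
```

    As with the PL/SQL version, 1 flags a column with at least one NULL.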

    qid & accept id: (33613089, 33613534) query: Grouping DateTime by hour in SQL soup:

    soup wrap:

    Try this.

    Test Data:

    DECLARE @MyTable AS TABLE(DateTimes DATETIME)
    INSERT INTO @MyTable(DateTimes)
    VALUES('2015-05-03 01:06:45')
    ,('2015-05-03 04:51:09')
    ,('2015-05-03 05:08:11')
    ,('2015-05-03 09:33:35')
    ,('2015-05-03 13:46:38')
    

    Query:

      SELECT Hourly
        FROM (SELECT DATEADD(HOUR, DATEDIFF(HOUR, 0, DateTimes), 0) AS Hourly
                FROM @MyTable) AS DatesAsHours
    GROUP BY Hourly
    

    Results:

    Hourly
    2015-05-03 01:00:00.000
    2015-05-03 04:00:00.000
    2015-05-03 05:00:00.000
    2015-05-03 09:00:00.000
    2015-05-03 13:00:00.000
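    The DATEADD/DATEDIFF trick just truncates each value to the top of its hour; a Python sketch over the same sample data:

```python
from datetime import datetime

# Sample values from the test data above.
times = ["2015-05-03 01:06:45", "2015-05-03 04:51:09", "2015-05-03 05:08:11",
         "2015-05-03 09:33:35", "2015-05-03 13:46:38"]

def floor_to_hour(s: str) -> datetime:
    dt = datetime.strptime(s, "%Y-%m-%d %H:%M:%S")
    return dt.replace(minute=0, second=0, microsecond=0)  # zero sub-hour parts

hourly = sorted({floor_to_hour(t) for t in times})  # the set plays GROUP BY
print([h.strftime("%H:%M") for h in hourly])  # ['01:00', '04:00', '05:00', '09:00', '13:00']
```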
    
    qid & accept id: (33633660, 33635765) query: weekly working report from mysql database soup:

    soup wrap:

    SQLFiddle doesn't work right now, but I'll try to explain how to accomplish your goal.

    First, you need to know the week of each tin date using the WEEKOFYEAR() function. This value lets you sum each week separately. Then you can write a query like this:

    select WEEKOFYEAR(tin) as numWeek,
               DATE(tin) as dateJob, 
               DAYOFWEEK(tin) dayOfWeek,
               usr,
               job,
               (IF( ISNULL(tout), UNIX_TIMESTAMP(), UNIX_TIMESTAMP(tout) ) - UNIX_TIMESTAMP(tin)) as DiffInOut
        from wtime;
    

    Now you can group your data by every field (numWeek, usr, dayOfWeek, dateJob and job) to get the detailed query:

    select numWeek,
           usr,
           dayOfWeek,
           datejob,
           job,
           sum(DiffInOut)
    from
      (
        select WEEKOFYEAR(tin) as numWeek,
               DATE(tin) as dateJob, 
               DAYOFWEEK(tin) dayOfWeek,
               usr,
               job,
               (IF( ISNULL(tout), UNIX_TIMESTAMP(), UNIX_TIMESTAMP(tout) ) - UNIX_TIMESTAMP(tin)) as DiffInOut
        from wtime
      ) result
    group by numWeek, usr, dayOfWeek, datejob, job
    

    and then add a UNION to get the totals per week:

    union
    select numWeek,
           usr,
           dayOfWeek,
           null,
           null,
           sum(DiffInOut)
    from
      (
        select WEEKOFYEAR(tin) as numWeek,
               NULL as dateJob, 
               8 dayOfWeek,
               usr,
               job,
               (IF( ISNULL(tout), UNIX_TIMESTAMP(), UNIX_TIMESTAMP(tout) ) - UNIX_TIMESTAMP(tin)) as DiffInOut
        from wtime
      ) result
    group by numWeek, usr, dayOfWeek
    order by numWeek, usr, dayOfWeek;
    

    SQLFiddle Example

    I hope that SQL Fiddle works for you.

    Result:

    | numWeek |   usr | dayOfWeek |                    datejob |     job | sum(DiffInOut) |
    |---------|-------|-----------|----------------------------|---------|----------------|
    |      46 | M0005 |         3 | November, 10 2015 00:00:00 | A001942 |          61314 |
    |      46 | M0005 |         8 |                     (null) |  (null) |          61314 |
    |      46 | M0006 |         3 | November, 10 2015 00:00:00 | A001843 |          61314 |
    |      46 | M0006 |         3 | November, 10 2015 00:00:00 | A001814 |              0 |
    |      46 | M0006 |         8 |                     (null) |  (null) |          61314 |
    |      46 | M0007 |         3 | November, 10 2015 00:00:00 | A001814 |          61314 |
    |      46 | M0007 |         3 | November, 10 2015 00:00:00 | .000002 |              0 |
    |      46 | M0007 |         8 |                     (null) |  (null) |          61314 |
    

    PD: The constant 8 for dayOfWeek is a little trick to sort the weekly-total row after the individual days.

    PD2: My query against your SQL Fiddle example gives this result:

    | numWeek |   usr | dayOfWeek |                    datejob |     job | sum(DiffInOut) |
    |---------|-------|-----------|----------------------------|---------|----------------|
    |      45 | M0006 |         4 | November, 04 2015 00:00:00 | ...ENDE |          50972 |
    |      45 | M0006 |         5 | November, 05 2015 00:00:00 | A001860 |           6080 |
    |      45 | M0006 |         5 | November, 05 2015 00:00:00 | ...ENDE |         310399 |
    |      45 | M0006 |         5 | November, 05 2015 00:00:00 | .000001 |           1935 |
    |      45 | M0006 |         5 | November, 05 2015 00:00:00 | .000002 |           4528 |
    |      45 | M0006 |         5 | November, 05 2015 00:00:00 | .000031 |          13434 |
    |      45 | M0006 |         5 | November, 05 2015 00:00:00 | A001814 |           9204 |
    |      45 | M0006 |         8 |                     (null) |  (null) |         396552 |
    |      46 | M0006 |         2 | November, 09 2015 00:00:00 | ...ENDE |          51363 |
    |      46 | M0006 |         2 | November, 09 2015 00:00:00 | .000001 |            114 |
    |      46 | M0006 |         2 | November, 09 2015 00:00:00 | .000002 |           4382 |
    |      46 | M0006 |         2 | November, 09 2015 00:00:00 | A001843 |          13738 |
    |      46 | M0006 |         2 | November, 09 2015 00:00:00 | A001860 |          17046 |
    |      46 | M0006 |         3 | November, 10 2015 00:00:00 | ...ENDE |           1561 |
    |      46 | M0006 |         3 | November, 10 2015 00:00:00 | .000002 |           4374 |
    |      46 | M0006 |         3 | November, 10 2015 00:00:00 | A001814 |           4924 |
    |      46 | M0006 |         3 | November, 10 2015 00:00:00 | A001843 |          25662 |
    |      46 | M0006 |         8 |                     (null) |  (null) |         123164 |
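    The roll-up above can be sketched in Python: sum seconds per (week, user, day, date, job), plus a weekly-total bucket keyed with day 8 so it sorts last. The (usr, job, tin, tout) rows are invented samples; note Python's isoweekday() numbers Monday=1..Sunday=7, while MySQL's DAYOFWEEK() uses Sunday=1..Saturday=7.

```python
from collections import defaultdict
from datetime import datetime

# Invented sample rows: (usr, job, tin, tout).
rows = [
    ("M0006", "A001843", "2015-11-10 08:00:00", "2015-11-10 12:00:00"),
    ("M0006", "A001814", "2015-11-10 13:00:00", "2015-11-10 14:30:00"),
]

fmt = "%Y-%m-%d %H:%M:%S"
detail = defaultdict(int)   # per (week, usr, day, date, job)
weekly = defaultdict(int)   # per (week, usr, 8): the weekly-total rows

for usr, job, tin, tout in rows:
    t1, t2 = datetime.strptime(tin, fmt), datetime.strptime(tout, fmt)
    secs = int((t2 - t1).total_seconds())
    week = t1.isocalendar()[1]              # plays the role of WEEKOFYEAR(tin)
    detail[(week, usr, t1.isoweekday(), t1.date(), job)] += secs
    weekly[(week, usr, 8)] += secs          # day "8" sorts after real days

print(dict(weekly))  # {(46, 'M0006', 8): 19800}
```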
    
    qid & accept id: (33648148, 33648645) query: T-SQL Replacing If Else (Case When) with lookup table soup:

    soup wrap:

    It is pretty difficult to follow your exact logic, since it is pretty complicated and not helped by the fact that no columns have table aliases so I don't know what column belongs to what table.

    I have however given it a go. Since you are reusing the same correlated subquery multiple times, it would probably be beneficial to move these to an APPLY so that the result can be reused. I then just tried to pick apart your logic replacing the statements like:

    CASE WHEN <expr> IS NULL THEN <fallback> ELSE <expr> END
    

    With

    ISNULL(<expr>, <fallback>)
    

    Giving a final query of:

    SELECT  CASE WHEN ISNULL(BKTXT.LookupResult, XBLNR.LookupResult) <> 'LEER' THEN
                ISNULL(BKTXT.LookupResult, XBLNR.LookupResult)
            ELSE 
                ISNULL(SGTXT.LookupResult, 'App')
            END         
    FROM    SourceTable
            CROSS APPLY 
            (   SELECT TOP 1 lookupResult 
                FROM ##lookupDefinition 
                WHERE lookupColumnName = 'BKTXT' 
                AND COEP_SGTXT LIKE lookupValue 
                ORDER BY LEN(lookupValue) DESC
            ) AS BKTXT
            CROSS APPLY 
            (   SELECT TOP 1 lookupResult 
                FROM ##lookupDefinition 
                WHERE lookupColumnName = 'XBLNR' 
                AND COEP_SGTXT LIKE lookupValue 
                ORDER BY LEN(lookupValue) DESC
            ) AS XBLNR
            CROSS APPLY 
            (   SELECT TOP 1 lookupResult 
                FROM ##lookupDefinition 
                WHERE lookupColumnName = 'SGTXT' 
                AND COEP_SGTXT LIKE lookupValue 
                ORDER BY LEN(lookupValue) DESC
            ) AS SGTXT;
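    The "TOP 1 ... ORDER BY LEN(lookupValue) DESC" lookup can be sketched in Python; the lookup rows and input text are invented, and substring containment stands in for SQL LIKE matching:

```python
# Invented lookup rows: (columnName, value matched against the text, result).
lookup = [
    ("BKTXT", "INV", "Invoice"),
    ("BKTXT", "INV-2015", "Invoice 2015"),
]

def best_match(column: str, text: str):
    # Longest matching lookupValue wins, mirroring ORDER BY LEN(...) DESC.
    # (Substring containment approximates LIKE for this sketch.)
    candidates = [(val, res) for col, val, res in lookup
                  if col == column and val in text]
    if not candidates:
        return None  # plays the role of the NULL an empty APPLY would yield
    return max(candidates, key=lambda c: len(c[0]))[1]

print(best_match("BKTXT", "INV-2015-0042"))  # Invoice 2015
```

    Returning None for no match is what makes the ISNULL fallback chain work.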
    
    qid & accept id: (33736317, 33785787) query: Fewest grouped by distinct - SQL soup:

    soup wrap:

    So I've finally found a way to do what I want!

    For the first query, since my underlying need was really "is there a single teacher who can do everything", I've lowered my expectations a bit and gone with this one (58 lines in my real case):

    SELECT
        (
            SELECT count(s.id_teacher) nb
            FROM t AS m
            INNER JOIN t AS s
                ON m.id_teacher = s.id_teacher
            GROUP BY m.id_course, m.id_teacher
            ORDER BY nb DESC
            LIMIT 1
            ) AS nbMaxBySingleTeacher,
        (
            SELECT COUNT(DISTINCT id_course) nb
            FROM t
            ) AS nbTotalCourseToDo
    

    SQLFiddle

    And I get back two values that answer my question "is one teacher enough?"

    +--------------------------------------+
    |nbMaxBySingleTeacher|nbTotalCourseToDo|
    +--------------------------------------+
    |         4          |        5        |
    +--------------------------------------+
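    The comparison the two values enable can be sketched in Python; the teacher-to-courses mapping is invented so that the numbers match the output above:

```python
# Invented mapping: which courses each teacher can cover.
assignments = {
    "t1": {"c1", "c2", "c3", "c4"},
    "t2": {"c2", "c5"},
}

# Most courses any single teacher covers vs. distinct courses overall.
all_courses = set().union(*assignments.values())
nb_max_by_single_teacher = max(len(c) for c in assignments.values())
nb_total_course_to_do = len(all_courses)

one_teacher_is_enough = nb_max_by_single_teacher == nb_total_course_to_do
print(nb_max_by_single_teacher, nb_total_course_to_do, one_teacher_is_enough)  # 4 5 False
```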
    

    The second query uses the schedule of the new course and takes the id of the one I want to check. It should tell me whether I need one more teacher, or whether my current one(s) are enough.

    SELECT COUNT(*) nb
    FROM (
        SELECT
            z.id_teacher
        FROM z
        WHERE
            z.id_course = 50
        ) t1
    WHERE
        FIND_IN_SET(t1.id_teacher, (
            SELECT GROUP_CONCAT(t2.id_teacher) lst
            FROM (
                SELECT DISTINCT COUNT(s.id_teacher) nb, m.id_teacher
                FROM t AS m
                INNER JOIN t AS s
                    ON m.id_teacher = s.id_teacher
                GROUP BY m.id_course, m.id_teacher
                ORDER BY nb DESC
                ) t2
            GROUP BY t2.nb
            ORDER BY nb DESC
            LIMIT 1
            ));
    

    SQLFiddle

    This tells me the number of teachers able to teach the courses I already have AND the new one I want. So if it's over zero, I don't need a new teacher:

    +--+
    |nb|
    +--+
    |1 |
    +--+
    
    qid & accept id: (33736568, 33737040) query: Oracle Database: getting hours as int value from passed date and time parameter soup:

    soup wrap:

    Oracle knows two ways to extract the hour. One is EXTRACT(HOUR FROM xx), where xx unfortunately must be a timestamp:

    select * from visits where visit_time > extract(hour from cast(:datetime as timestamp))
    

    The other is TO_CHAR(xx, 'HH24') which gives you a string:

    select * from visits where visit_time > to_number(to_char(:datetime, 'hh24'))
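    The two routes mirror what most languages offer: a typed field access versus a formatted string converted back to a number. A tiny Python sketch with a made-up sample value:

```python
from datetime import datetime

# Invented sample timestamp.
dt = datetime(2015, 11, 17, 14, 30, 0)

hour_attr = dt.hour                 # like EXTRACT(HOUR FROM ts)
hour_str  = int(dt.strftime("%H"))  # like TO_NUMBER(TO_CHAR(dt, 'HH24'))
print(hour_attr, hour_str)  # 14 14
```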
    
    qid & accept id: (33791094, 33791701) query: How do I join the most recent row in one table to most recent row in another table (oracle) soup:

    soup wrap:

    I use two CTEs to calculate the most recent row in each category, then join both.

    SqlFiddleDemo

    WITH n_node as (
        SELECT "Name", "Attribute",
               row_number() over (partition by "Name" order by "Date" DESC) rn
        FROM Nodes 
    ), 
    n_vector as (
        SELECT "Node", "V_NAME", "color",
               row_number() over (partition by "Node", "V_NAME" order by "Date" DESC) rn
        FROM Vectors 
    )
    SELECT "Name", "Attribute", "V_NAME", "color"
    FROM n_node
    JOIN n_vector 
      ON n_node.rn = n_vector.rn
     AND n_node.rn = 1
     AND n_node."Name" = n_vector."Node"
    ORDER BY "Name" DESC
    

    OUTPUT

    | Name | Attribute | V_NAME | color |
    |------|-----------|--------|-------|
    |   14 |        A2 |     V1 |   red |
    |   14 |        A2 |     V2 |  blue |
    |   12 |        B1 |     V3 | black |
    |   12 |        B1 |     V4 | black |
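    The greatest-row-per-group pattern above is portable to any engine with window functions. As a runnable sanity check, here is a minimal sketch using Python's bundled sqlite3 (SQLite 3.25+; the table, columns, and data are invented for the demo, not taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE nodes (name TEXT, attribute TEXT, d INTEGER);
INSERT INTO nodes VALUES ('14','A1',1), ('14','A2',2), ('12','B1',5);
""")
# Keep only the newest row per name, exactly like the rn = 1 filter above.
rows = conn.execute("""
    SELECT name, attribute
    FROM (SELECT name, attribute,
                 ROW_NUMBER() OVER (PARTITION BY name ORDER BY d DESC) AS rn
          FROM nodes)
    WHERE rn = 1
    ORDER BY name
""").fetchall()
print(rows)  # [('12', 'B1'), ('14', 'A2')]
```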
    
    qid & accept id: (33791862, 33792989) query: sql count rows where diff Dates is less than 30 minutes soup:

    soup wrap:

    Two queries that both give your expected results and use 30-minute windows, but have completely different interpretations of your requirements... you might want to clarify the question.

    SQL Fiddle

    Oracle 11g R2 Schema Setup:

    CREATE TABLE table_name (PersistentId, UserId, EnterDate ) AS
              SELECT 111, 1,  to_date('June 1, 2015 17:05','Month DD, YYYY HH24:MI') FROM DUAL
    UNION ALL SELECT 112, 1,  to_date('June 1, 2015 17:21','Month DD, YYYY HH24:MI') FROM DUAL
    UNION ALL SELECT 113, 1,  to_date('June 1, 2015 17:27','Month DD, YYYY HH24:MI') FROM DUAL
    UNION ALL SELECT 114, 1,  to_date('June 1, 2015 18:25','Month DD, YYYY HH24:MI') FROM DUAL
    UNION ALL SELECT 115, 1,  to_date('June 1, 2015 19:00','Month DD, YYYY HH24:MI') FROM DUAL
    UNION ALL SELECT 116, 2,  to_date('June 1, 2015 18:05','Month DD, YYYY HH24:MI') FROM DUAL
    UNION ALL SELECT 117, 2,  to_date('June 1, 2015 18:21','Month DD, YYYY HH24:MI') FROM DUAL
    UNION ALL SELECT 118, 2,  to_date('June 1, 2015 19:27','Month DD, YYYY HH24:MI') FROM DUAL
    

    Query 1 - Count results in 30 minute windows:

    SELECT UserId,
           "Count"
    FROM (
      SELECT UserID,
             COUNT(*) OVER ( PARTITION BY UserId ORDER BY EnterDate RANGE BETWEEN INTERVAL '30' MINUTE PRECEDING AND CURRENT ROW ) AS "Count",
             EnterDate,
             LEAD(EnterDate) OVER ( PARTITION BY UserId ORDER BY EnterDate ) AS nextEnterDate
      FROM   Table_Name
    )
    WHERE "Count" > 1
    AND   EnterDate + INTERVAL '30' MINUTE < nextEnterDate
    

    Results:

    | USERID | Count |
    |--------|-------|
    |      1 |     3 |
    |      2 |     2 |
    

    Query 2 - Count all rows that are within 30 minutes of another row:

    SELECT UserID,
           COUNT(1) AS "Count"
    FROM (
      SELECT UserID,
             EnterDate,
             LAG(EnterDate) OVER ( PARTITION BY UserId ORDER BY EnterDate ) AS prevDate,
             LEAD(EnterDate) OVER ( PARTITION BY UserId ORDER BY EnterDate ) AS nextDate
      FROM   Table_Name
    )
    WHERE  EnterDate - INTERVAL '30' MINUTE < prevDate
    OR     EnterDate + INTERVAL '30' MINUTE > nextDate
    GROUP BY UserId
    

    Results:

    | USERID | Count |
    |--------|-------|
    |      1 |     3 |
    |      2 |     2 |
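    Query 2's interpretation (count every row that has a neighbour within 30 minutes) can be mirrored outside SQL; this Python sketch of the same LAG/LEAD logic is a hypothetical illustration, reusing user 1's timestamps from the setup above:

```python
from datetime import datetime, timedelta

def count_clustered(times, window=timedelta(minutes=30)):
    # A row counts when its previous (LAG) or next (LEAD) neighbour
    # falls strictly within the 30-minute window.
    times = sorted(times)
    count = 0
    for i, t in enumerate(times):
        prev_ok = i > 0 and t - times[i - 1] < window
        next_ok = i < len(times) - 1 and times[i + 1] - t < window
        if prev_ok or next_ok:
            count += 1
    return count

user1 = [datetime(2015, 6, 1, 17, 5), datetime(2015, 6, 1, 17, 21),
         datetime(2015, 6, 1, 17, 27), datetime(2015, 6, 1, 18, 25),
         datetime(2015, 6, 1, 19, 0)]
print(count_clustered(user1))  # 3, matching UserId 1 above
```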
    
    qid & accept id: (33831023, 33834638) query: SQL Server: Alternate Assigning a Row based on a criteria soup:

    soup wrap:

    The general idea of this is to loop through the accounts table, ordered by state (or whatever joins your people to the table). While doing this, use an index/cursor for the people table and assign them accordingly.

    Using the index you can keep track of which person to assign to each appointment/account, and reset it back to the first person once you don't have any more people left.

    I used temp tables (actually table variables) to give people an easily run-able solution to this problem.

    declare @tempTablePeople TABLE 
    ( 
        [name] varchar(50), 
        [state] varchar(50), 
        [order] int
    )
    INSERT INTO @tempTablePeople 
    VALUES
    ('Jack', 'Virginia', 1),
    ('Jill', 'Virginia', 2),
    ('Ron', 'Florida', 1),
    ('Bob', 'Florida', 2),
    ('Scott', 'Florida', 3);
    
    declare @tempTableStateAccts TABLE 
    ( 
        [AcctNo] int,
        [state] varchar(50)
    )
    INSERT INTO @tempTableStateAccts 
    VALUES
    (22234, 'Virginia'),
    (32432, 'Virginia'),
    (02342, 'Florida'),
    (43423, 'Virginia'),
    (69449, 'Virginia'),
    (33233, 'Florida'),
    (52342, 'Florida'),
    (33342, 'Florida'),
    (77742, 'Florida'),
    (69429, 'Virginia')
    
    
    
    declare @tempTableStateAcctsPeople TABLE 
    (
        [AcctNo] int,
        [state] varchar(50),
        [name] varchar(50)
    )
    
    
    DECLARE @currentAcct int;
    DECLARE @currentState varchar(50);
    DECLARE @lastState varchar(50);
    DECLARE @currentNameIndex int;
    DECLARE @currentName varchar(50);
    

    The meat of the query is here, where you loop through the rows of the state accounts table using an index to keep track. Notice that you need to order by state to get the desired result (otherwise your index would be reset early).

    SET @currentNameIndex = 1;
    WHILE EXISTS ( SELECT * FROM @tempTableStateAccts)
    BEGIN 
        -- Get current variables for insert from current row : MUST ORDER BY STATE if you want person order to not skip anyone at the start
        SELECT @currentAcct = AcctNo, @currentState = [state] FROM @tempTableStateAccts ORDER BY [state]
        -- Reset Index if on a new state
        IF @lastState IS NULL OR @lastState != @currentState
            SET @currentNameIndex = 1
        SET @lastState = @currentState
        -- If no current name then reset index to 1
        SET @currentName = ISNULL
                            ( 
                                    (SELECT name FROM @tempTablePeople WHERE [state] = @currentState AND [order] = @currentNameIndex), 
                                    (SELECT name FROM @tempTablePeople WHERE [state] = @currentState AND [order] = 1)
                            )
        SET @currentNameIndex = ISNULL
                            ( 
                                    (SELECT @currentNameIndex FROM @tempTablePeople WHERE [state] = @currentState AND [order] = @currentNameIndex), 
                                    1
                            )
    
        -- Get current person for this state based on index
        SELECT @currentName = name FROM @tempTablePeople WHERE [state] = @currentState AND [order] = @currentNameIndex
    
        INSERT INTO @tempTableStateAcctsPeople
        VALUES
        (
            @currentAcct, 
            @currentState,
            @currentName
        )
        SET @currentNameIndex = @currentNameIndex + 1
        DELETE FROM @tempTableStateAccts WHERE AcctNo = @currentAcct
    END
    -- View final data
    SELECT * FROM @tempTableStateAcctsPeople
    

    You can paste both parts of the SQL script, in order, and run it to see the results.

    AcctNo  state       name
    32432   Virginia    Jack
    69429   Virginia    Jill
    22234   Virginia    Jack
    69449   Virginia    Jill
    43423   Virginia    Jack
    77742   Florida     Ron
    33342   Florida     Bob
    52342   Florida     Scott
    33233   Florida     Ron
    2342    Florida     Bob
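    The row-by-row loop above is essentially a per-state round robin. As a sketch of that idea in a host language (the cycling-cursor structure is invented; the names and a subset of the accounts are copied from the example):

```python
from itertools import cycle

# People per state and a subset of the accounts from the example above.
people = {"Virginia": ["Jack", "Jill"], "Florida": ["Ron", "Bob", "Scott"]}
accounts = [(22234, "Virginia"), (32432, "Virginia"), (2342, "Florida"),
            (43423, "Virginia"), (69449, "Virginia"), (33233, "Florida")]

# One independent cycling cursor per state, playing the role of
# @currentNameIndex being reset whenever the state changes.
cursors = {state: cycle(names) for state, names in people.items()}
assigned = [(acct, state, next(cursors[state]))
            for acct, state in sorted(accounts, key=lambda a: a[1])]
print(assigned)
```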
    
    qid & accept id: (33860211, 33860562) query: Regex to split values in PostgreSQL soup:

    soup wrap:

    For extracting the characters before the digits:

    regexp_replace(fieldname, '\d.*$', '')
    

    For extracting the characters after the digits:

    regexp_replace(fieldname, '^([^\d]*\d*)', '')
    

    Note that:

    • if there are no digits, the first will return the original value and the second an empty string. This way you are sure that the concatenation equals the original value in this case as well.
    • the concatenation of the three parts will not return the original if there are non-numerical characters surrounded by digits: those will be lost.
    • This also works for any non-alphanumeric characters like @, [, ! ...etc.

    Final SQL

    select
      fieldname as original,
      regexp_replace(fieldname, '\d.*$', '') as before_s,
      regexp_replace(fieldname, '^([^\d]*\d*)', '') as after_s,
      cast(nullif(regexp_replace(fieldname, '[^\d]', '', 'g'), '') as integer) as number
    from mytable;  
    

    See fiddle.
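    The same patterns carry over to Python's re module, which makes them easy to check outside PostgreSQL; this sketch is a hypothetical illustration (the function name is invented):

```python
import re

def split_around_digits(value):
    # The same two patterns as the SQL above, plus a digits-only extract.
    before = re.sub(r'\d.*$', '', value)       # everything before the digits
    after = re.sub(r'^[^\d]*\d*', '', value)   # everything after the digits
    digits = re.sub(r'\D', '', value)
    return before, (int(digits) if digits else None), after

print(split_around_digits("AB123CD"))  # ('AB', 123, 'CD')
print(split_around_digits("XYZ"))      # ('XYZ', None, '')
```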

    qid & accept id: (33860657, 33860676) query: Mysql Sum Conditional soup:

    soup wrap:

    You can use SUM with CASE WHEN:

    SELECT SUM(Cost + CASE WHEN Include_Extra = 1   -- if Include_Extra is BOOLEAN, drop the "= 1"
                           THEN COALESCE(Extra_Seat_Cost,0) 
                           ELSE 0 END) AS total
    FROM table_name;
    

    SqlFiddleDemo

    I've added COALESCE in case Extra_Seat_Cost is nullable, because number + NULL produces NULL.


    If you have a grouping column, use:

    SELECT group_column, SUM(Cost + CASE WHEN Include_Extra = 1 
                                         THEN COALESCE(Extra_Seat_Cost,0) 
                                         ELSE 0 END) AS total
    FROM table_name
    GROUP BY group_column;
    
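    The conditional-SUM pattern is standard SQL and runs unchanged on SQLite, which makes for a quick runnable check via Python (the table and figures are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE bookings (cost REAL, include_extra INTEGER, extra_seat_cost REAL);
INSERT INTO bookings VALUES (100, 1, 20), (50, 0, 99), (30, 1, NULL);
""")
# Extra seat cost counts only when include_extra is set; COALESCE guards NULL.
(total,) = conn.execute("""
    SELECT SUM(cost + CASE WHEN include_extra = 1
                           THEN COALESCE(extra_seat_cost, 0)
                           ELSE 0 END)
    FROM bookings
""").fetchone()
print(total)  # 120 + 50 + 30 = 200.0
```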
    
    qid & accept id: (33883763, 33884058) query: Access SQL Select rows that have value in common with results of condition soup:

    soup wrap:

    Assuming you want something like this

    select * from invoice where invoice in (SELECT invoice FROM invoice WHERE Delivered = 'True')
    

    With the nested query you're selecting and outputting the invoice numbers for reference in the parent query. Here the output of the nested query is used to 'filter' the results.

    You already got it to work, but here is another way, without changing the table.

    SELECT invoice.ID, invoice.Invoice, invoice.Box, invoice.Delivered, invoice_1.Delivered AS Expr1
    FROM invoice, invoice AS invoice_1
    WHERE (((invoice.Invoice)=[invoice_1].[Invoice]) AND (([invoice_1].[Delivered])=Yes));
    

    You can test it here.
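    The IN-subquery filter is plain SQL and behaves the same on other engines; a minimal runnable sketch via Python's sqlite3 (table layout and data are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE invoice (id INTEGER, invoice INTEGER, box INTEGER, delivered TEXT);
INSERT INTO invoice VALUES (1, 10, 1, 'True'), (2, 10, 2, 'False'),
                           (3, 20, 1, 'False');
""")
# Every row of every invoice that has at least one delivered box.
rows = conn.execute("""
    SELECT id FROM invoice
    WHERE invoice IN (SELECT invoice FROM invoice WHERE delivered = 'True')
    ORDER BY id
""").fetchall()
print(rows)  # [(1,), (2,)]
```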

    For those running into the same problem, there is an explanation and a solution here

    qid & accept id: (33897526, 33898634) query: Add nchar field to DateTime in Informix soup:

    soup wrap:

    Ideally, your time zone column would be an INTERVAL HOUR TO MINUTE type; you'd then simply add the two columns to get the desired result. Since it is a character type, substringing in some form will be necessary. Using LEFT is one option; SUBSTRING is another; using the Informix subscripting notation is another. The CAST isn't crucial; Informix is pretty good about coercing things.

    Unless you actually want only hours and minutes in the result (which is a legitimate choice), your EXTEND operation is unnecessary and undesirable; it means your result won't include the seconds value from your data.

    Note that some time zones include minutes values. Newfoundland is on UTC-04:30; India is on UTC+05:30; Nepal is on UTC+05:45. (See World Time Zone for more information.) Getting the minutes accurate is harder because the sign has to be carried through.

    As to formatting in AM/PM notation, apart from the question 'why', the answer is to use the TO_CHAR() function and a ghastligram expressing the time format that you want.

    Demonstration:

    create table zone_char(time_stamp datetime year to second, time_zone nchar(5));
    insert into zone_char values('2015-11-24 21:00:00', '-0500');
    insert into zone_char values('2015-11-23 15:00:00', '-0600');
    insert into zone_char values('2015-11-22 17:19:21', '+0515');
    insert into zone_char values('2015-11-21 02:56:31', '-0430');
    

    Various ways to select the data:

    select  extend(time_stamp, year to minute) + LEFT(time_zone,3) units hour,
            time_stamp + LEFT(time_zone,3) units hour,
            time_stamp + time_zone[1,3] units hour,
            time_stamp + time_zone[1,3] units hour + (time_zone[1] || time_zone[4,5]) units minute,
            TO_CHAR(time_stamp + time_zone[1,3] units hour + (time_zone[1] || time_zone[4,5]) units minute,
                    '%A %e %B %Y %I.%M.%S %p')
    from zone_char;
    

    Sample output:

    2015-11-24 16:00   2015-11-24 16:00:00   2015-11-24 16:00:00   2015-11-24   16:00:00   Tuesday 24 November 2015 04.00.00 PM
    2015-11-23 09:00   2015-11-23 09:00:00   2015-11-23 09:00:00   2015-11-23   09:00:00   Monday 23 November 2015 09.00.00 AM
    2015-11-22 22:19   2015-11-22 22:19:21   2015-11-22 22:19:21   2015-11-22   22:34:21   Sunday 22 November 2015 10.34.21 PM
    2015-11-20 22:56   2015-11-20 22:56:31   2015-11-20 22:56:31   2015-11-20   22:26:31   Friday 20 November 2015 10.26.31 PM
    

    And note how much easier it is when the time zone is represented as an INTERVAL HOUR TO MINUTE:

    alter table zone_char add hhmm interval hour to minute;
    update zone_char set hhmm = time_zone[1,3] || ':' || time_zone[4,5];
    

    SELECT:

    select  time_stamp, hhmm, extend(time_stamp + hhmm, year to minute),
            time_stamp + hhmm,
            TO_CHAR(time_stamp + hhmm, '%A %e %B %Y %I.%M.%S %p')
    from zone_char;
    

    Result:

    2015-11-24 21:00:00   -5:00   2015-11-24 16:00   2015-11-24 16:00:00   Tuesday 24 November 2015 04.00.00 PM
    2015-11-23 15:00:00   -6:00   2015-11-23 09:00   2015-11-23 09:00:00   Monday 23 November 2015 09.00.00 AM
    2015-11-22 17:19:21    5:15   2015-11-22 22:34   2015-11-22 22:34:21   Sunday 22 November 2015 10.34.21 PM
    2015-11-21 02:56:31   -4:30   2015-11-20 22:26   2015-11-20 22:26:31   Friday 20 November 2015 10.26.31 PM
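    The sign-carry issue for minute offsets noted above is exactly what an interval type handles for you. As a hedged sketch of parsing a '+HHMM'/'-HHMM' string and applying the sign to both fields (the helper function is invented; the timestamps are from the demonstration data):

```python
from datetime import datetime, timedelta

def apply_offset(ts, zone):
    # zone looks like '-0430' or '+0515'; the sign must be applied to
    # the minutes as well as the hours (the tricky part noted above).
    sign = -1 if zone[0] == "-" else 1
    return ts + sign * timedelta(hours=int(zone[1:3]), minutes=int(zone[3:5]))

print(apply_offset(datetime(2015, 11, 21, 2, 56, 31), "-0430"))
# 2015-11-20 22:26:31, matching the last row of the result above
```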
    
    qid & accept id: (33908143, 33923555) query: Escaping special characters for JSON output soup:

    soup wrap:

    Here's a start. Replacing all the regular characters is easy enough; it's the control characters that will be tricky. This method uses a group consisting of a character class that contains the characters you want to add the backslash in front of. Note that characters inside of the class do not need to be escaped. The 1 argument to REGEXP_REPLACE means start at the first position, and the 0 means replace all occurrences found in the source string.

    SELECT REGEXP_REPLACE('t/h"is"'||chr(9)||'is a|te\st', '([/\|"])', '\\\1', 1, 0) FROM dual;
    

    Replacing the TAB and a carriage return is easy enough by wrapping the above in REPLACE calls, but it stinks to have to do this for each control character. Thus, I'm afraid my answer isn't really a full answer for you, it only helps you with the regular characters a bit:

    SQL> SELECT REPLACE(REPLACE(REGEXP_REPLACE('t/h"is"'||chr(9)||'is
      2  a|te\st', '([/\|"])', '\\\1', 1, 0), chr(9), '\t'), chr(10), '\n') fixed
      3  FROM dual;
    
    FIXED
    -------------------------
    t\/h\"is\"\tis\na\|te\\st
    
    SQL>
    

    EDIT: Here's a solution! I don't claim to understand it fully, but basically it creates a translation table that joins to your string (in the inp_str table). The CONNECT BY LEVEL traverses the length of the string and replaces characters where there is a match in the translation table. I modified a solution found here: http://database.developer-works.com/article/14901746/Replace+%28translate%29+one+char+to+many which really doesn't have a great explanation. Hopefully someone here will chime in and explain this fully.

    SQL> with trans_tbl(ch_frm, str_to) as (
         select '"',     '\"' from dual union
         select '/',     '\/' from dual union
         select '\',     '\\' from dual union
         select chr(8),  '\b' from dual union -- BS
         select chr(12), '\f' from dual union -- FF
         select chr(10), '\n' from dual union -- NL
         select chr(13), '\r' from dual union -- CR
         select chr(9),  '\t' from dual       -- HT
       ),
       inp_str as (
         select 'No' || chr(12) || 'w is ' || chr(9) || 'the "time" for /all go\od men to '||
         chr(8)||'com' || chr(10) || 'e to the aid of their ' || chr(13) || 'country' txt from dual
       )
       select max(replace(sys_connect_by_path(ch,'`'),'`')) as txt
       from (
       select lvl
        ,decode(str_to,null,substr(txt, lvl, 1),str_to) as ch
        from inp_str cross join (select level lvl from inp_str connect by level <= length(txt))
        left outer join trans_tbl on (ch_frm = substr(txt, lvl, 1))
        )
        connect by lvl = prior lvl+1
        start with lvl = 1;
    
    TXT
    ------------------------------------------------------------------------------------------
    No\fw is \tthe \"time\" for \/all go\\od men to \bcom\ne to the aid of their \rcountry
    
    SQL>
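    As a hedged illustration (not part of the original answer; the name `esc_json_py` is invented), the per-character translation the SQL performs can be sketched in plain Python:

```python
# Sketch of the same per-character translation the SQL above performs:
# look each character up in a translation table and fall back to the
# character itself when there is no match.
TRANS = {
    '"': '\\"', '/': '\\/', '\\': '\\\\',
    '\b': '\\b', '\f': '\\f', '\n': '\\n', '\r': '\\r', '\t': '\\t',
}

def esc_json_py(text):
    return ''.join(TRANS.get(ch, ch) for ch in text)
```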
    

    EDIT 8/10/2016 - Make it a function for encapsulation and reusability so you could use it for multiple columns at once:

    create or replace function esc_json(string_in varchar2)
    return varchar2
    is 
    s_converted varchar2(4000);
    BEGIN
    with trans_tbl(ch_frm, str_to) as (
         select '"',     '\"' from dual union
         select '/',     '\/' from dual union
         select '\',     '\\' from dual union
         select chr(8),  '\b' from dual union -- BS
         select chr(12), '\f' from dual union -- FF
         select chr(10), '\n' from dual union -- NL
         select chr(13), '\r' from dual union -- CR
         select chr(9),  '\t' from dual       -- HT
       ),
       inp_str(txt) as (
         select string_in from dual
       )
       select max(replace(sys_connect_by_path(ch,'`'),'`')) as c_text
       into s_converted   
       from (
       select lvl
        ,decode(str_to,null,substr(txt, lvl, 1),str_to) as ch
        from inp_str cross join (select level lvl from inp_str connect by level <= length(txt))
        left outer join trans_tbl on (ch_frm = substr(txt, lvl, 1))
        )
        connect by lvl = prior lvl+1
        start with lvl = 1;
    
        return s_converted;
    end esc_json;
    

    Example to call for multiple columns at once:

    select esc_json(column_1), esc_json(column_2)
    from your_table;
    
    qid & accept id: (33939688, 33954369) query: How to present tree of id / hierarchical query soup:

    soup wrap:

    SQL Fiddle

    Oracle 11g R2 Schema Setup:

    CREATE TABLE ORDERS (ID NUMBER PRIMARY KEY);
    INSERT INTO ORDERS VALUES (65733);
    INSERT INTO ORDERS VALUES (23423);
    INSERT INTO ORDERS VALUES (456765);
    INSERT INTO ORDERS VALUES (23464);
    INSERT INTO ORDERS VALUES (77532);
    insert into ORDERS values (23422);
    insert into ORDERS values (56435);
    
    CREATE TABLE PRODUCTS (
      ID NUMBER PRIMARY KEY,
      ORDER_ID NUMBER REFERENCES ORDERS(ID),
      PARENT_ID NUMBER
    );
    INSERT INTO PRODUCTS VALUES (1,65733,3);
    INSERT INTO PRODUCTS VALUES (2,23423,3);
    INSERT INTO PRODUCTS VALUES (3,77532,4);
    INSERT INTO PRODUCTS VALUES (4,23464,0); 
    INSERT INTO PRODUCTS VALUES (5,456765,null);
    insert into products values (6,23422,7);
    insert into products values (7,56435,0);
    

    Query 1:

    WITH WantToPresent( ID ) AS (
      SELECT 23464 FROM DUAL
    )
    SELECT ORDER_ID
    FROM   PRODUCTS p
    START WITH
      EXISTS( SELECT 'X'
              FROM   WantToPresent w
              WHERE  p.ORDER_ID = w.ID )
    CONNECT BY ID = PRIOR parent_id
    UNION
    SELECT ORDER_ID
    FROM   PRODUCTS p
    START WITH
      EXISTS( SELECT 'X'
              FROM   WantToPresent w
              WHERE  p.ORDER_ID = w.ID )
    CONNECT BY PRIOR ID = parent_id
    UNION
    SELECT p2.ORDER_ID
    FROM   PRODUCTS p1
           INNER JOIN
           PRODUCTS p2
           ON ( p1.PARENT_ID = p2.PARENT_ID AND p2.PARENT_ID <> 0 )
           INNER JOIN
           WantToPresent w
           ON ( p1.ORDER_ID = w.ID )
    

    Results:

    | ORDER_ID |
    |----------|
    |    23423 |
    |    23464 |
    |    65733 |
    |    77532 |
    

    Query 2:

    WITH WantToPresent( ID ) AS (
      SELECT 23423 FROM DUAL
    )
    SELECT ORDER_ID
    FROM   PRODUCTS p
    START WITH
      EXISTS( SELECT 'X'
              FROM   WantToPresent w
              WHERE  p.ORDER_ID = w.ID )
    CONNECT BY 
      ID = PRIOR parent_id
    UNION
    SELECT ORDER_ID
    FROM   PRODUCTS p
    START WITH
      EXISTS( SELECT 'X'
              FROM   WantToPresent w
              WHERE  p.ORDER_ID = w.ID )
    CONNECT BY PRIOR ID = parent_id
    UNION
    SELECT p2.ORDER_ID
    FROM   PRODUCTS p1
           INNER JOIN
           PRODUCTS p2
           ON ( p1.PARENT_ID = p2.PARENT_ID AND p2.PARENT_ID <> 0 )
           INNER JOIN
           WantToPresent w
           ON ( p1.ORDER_ID = w.ID )
    

    Results:

    | ORDER_ID |
    |----------|
    |    23423 |
    |    23464 |
    |    65733 |
    |    77532 |
    

    Query 3:

    WITH WantToPresent( ID ) AS (
      SELECT 23464 FROM DUAL UNION ALL
      SELECT 65733 FROM DUAL
    )
    SELECT ORDER_ID
    FROM   PRODUCTS p
    START WITH
      EXISTS( SELECT 'X'
              FROM   WantToPresent w
              WHERE  p.ORDER_ID = w.ID )
    CONNECT BY 
      ID = PRIOR parent_id
    UNION
    SELECT ORDER_ID
    FROM   PRODUCTS p
    START WITH
      EXISTS( SELECT 'X'
              FROM   WantToPresent w
              WHERE  p.ORDER_ID = w.ID )
    CONNECT BY PRIOR ID = parent_id
    UNION
    SELECT p2.ORDER_ID
    FROM   PRODUCTS p1
           INNER JOIN
           PRODUCTS p2
           ON ( p1.PARENT_ID = p2.PARENT_ID AND p2.PARENT_ID <> 0 )
           INNER JOIN
           WantToPresent w
           ON ( p1.ORDER_ID = w.ID )
    

    Results:

    | ORDER_ID |
    |----------|
    |    23423 |
    |    23464 |
    |    65733 |
    |    77532 |
    

    Query 4:

    WITH WantToPresent( ID ) AS (
      SELECT 56435 FROM DUAL
    )
    SELECT ORDER_ID
    FROM   PRODUCTS p
    START WITH
      EXISTS( SELECT 'X'
              FROM   WantToPresent w
              WHERE  p.ORDER_ID = w.ID )
    CONNECT BY 
      ID = PRIOR parent_id
    UNION
    SELECT ORDER_ID
    FROM   PRODUCTS p
    START WITH
      EXISTS( SELECT 'X'
              FROM   WantToPresent w
              WHERE  p.ORDER_ID = w.ID )
    CONNECT BY PRIOR ID = parent_id
    UNION
    SELECT p2.ORDER_ID
    FROM   PRODUCTS p1
           INNER JOIN
           PRODUCTS p2
           ON ( p1.PARENT_ID = p2.PARENT_ID AND p2.PARENT_ID <> 0 )
           INNER JOIN
           WantToPresent w
           ON ( p1.ORDER_ID = w.ID )
    

    Results:

    | ORDER_ID |
    |----------|
    |    23422 |
    |    56435 |
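    As a hedged sketch (an illustration, not from the original answer), the three UNIONed branches in each query amount to collecting, for the products of the wanted order, their descendants, their ancestors, and any siblings sharing a non-zero parent. With the schema setup data above:

```python
def related_orders(products, want_ids):
    """products: {product_id: (order_id, parent_id)}; want_ids: order ids."""
    start = {p for p, (oid, _) in products.items() if oid in want_ids}
    hits = set(start)
    # branch with CONNECT BY PRIOR ID = parent_id: descendants
    frontier = set(start)
    while frontier:
        frontier = {p for p, (_, par) in products.items()
                    if par in frontier} - hits
        hits |= frontier
    # branch with CONNECT BY ID = PRIOR parent_id: ancestors
    frontier = set(start)
    while frontier:
        frontier = {products[p][1] for p in frontier
                    if products[p][1] in products} - hits
        hits |= frontier
    # final branch: siblings that share a non-zero parent
    for p in start:
        par = products[p][1]
        if par:
            hits |= {q for q, (_, pp) in products.items() if pp == par}
    return sorted({products[p][0] for p in hits})
```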
    
    qid & accept id: (33962020, 33962574) query: Mysql GROUP BY DATE from text column soup:

    soup wrap:

    You can try:

    SELECT date_format(str_to_date(DATECOLUMN, '%d/%m/%Y'), '%d/%m/%Y') AS MyDate, COUNT(*)
    FROM TABLE 
    GROUP BY MyDate
    

    For interval and group you can try this:

    SELECT COUNT(*), MyDate
    FROM TABLE, (
        SELECT date_format(str_to_date(DATECOLUMN, '%d/%m/%Y'), '%d/%m/%Y') AS MyDate
        FROM TABLE) Tmp 
    WHERE date_format(str_to_date(MyDate, '%d/%m/%Y'), '%Y-%m-%d') >= NOW() - INTERVAL 7 DAY
    GROUP BY MyDate
    
    qid & accept id: (33965409, 33975489) query: SQL - Exporting a table with xml colomun into a text file soup:

    soup wrap:

    The answers you gave in the comments suggest that you are going the wrong way... The better approach is a linked server:

    Read here: https://msdn.microsoft.com/en-us/library/ff772782.aspx?f=255&MSPPError=-2147217396

    Further information here: https://msdn.microsoft.com/en-us/library/ms190479.aspx?f=255&MSPPError=-2147217396

    Try this in your SS2014

    USE [master]
    GO
    EXEC master.dbo.sp_addlinkedserver 
        @server = N'YourLowerServer', 
        @srvproduct=N'SQL Server' ;
    GO
    

    You need this to get access:

    EXEC master.dbo.sp_addlinkedsrvlogin 
        @rmtsrvname = N'YourLowerServer', 
        @locallogin = NULL , 
        @useself = N'True' ;
    GO
    

    Once this is done you can use INSERT INTO from one server directly into the other. Try this in your SS2014:

    INSERT INTO YourLowerServer.YourDatabase.dbo.TableName(col1,col2,...)
    SELECT col1,col2,... FROM dbo.TableName 
    

    If you want to get rid of your linked server after this operation use sp_dropserver (read here: https://msdn.microsoft.com/en-us/library/ms174310.aspx)

    Hope this helps...

    qid & accept id: (34004813, 34006543) query: find value in comma separated list for each record of a different table soup:

    soup wrap:

    So what I ended up doing instead that solved my issue was joining the table in like this:

    JOIN #CityIDs c
      ON cpcpp.CUST_PROD_PARM_VAL LIKE '' + c.FirstCitySearchText + ''
      OR cpcpp.CUST_PROD_PARM_VAL LIKE '' + c.LastCitySearchText + ''
      OR cpcpp.CUST_PROD_PARM_VAL LIKE '' + c.OnlyCitySearchText + ''
      OR cpcpp.CUST_PROD_PARM_VAL LIKE '' + c.City_ID + ''
      OR cpcpp.CUST_PROD_PARM_VAL LIKE 'ALL'
    

    And this is what those search text fields are:

    UPDATE c
       SET c.FirstCitySearchText = CAST(c.City_ID AS VARCHAR(100))+ ',%'
         , c.LastCitySearchText = '%,' + CAST(c.City_ID AS VARCHAR(100))
         , c.OnlyCitySearchText = '%,'  + CAST(c.City_ID AS VARCHAR(100)) + ',%'
      FROM #CityIDs c
    

    I couldn't use the WHERE clauses that were posted because I wasn't sure how to join the table in to get the city IDs, so instead I used those conditions as the join predicate.

    This isn't a good solution but it's the only one that has worked so far. I'm getting back the results that I expect and the stored procedure is only taking about 20 seconds to run, which is considered a win.
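    For illustration (a sketch, not from the answer; the function name is invented), the LIKE patterns built above implement this membership test against the comma-separated value:

```python
def city_matches(param_val, city_id):
    """True when city_id is an element of the comma-separated param_val,
    or when the parameter is the literal 'ALL'."""
    cid = str(city_id)
    return (param_val == 'ALL'
            or param_val == cid                    # the only element
            or param_val.startswith(cid + ',')     # first element
            or param_val.endswith(',' + cid)       # last element
            or (',' + cid + ',') in param_val)     # middle element
```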

    qid & accept id: (34005749, 34005871) query: Query to delete records older than n active dates from each group soup:

    soup wrap:

    You can use row_number to get the last 5 days when the table had an entry. Then delete based on the generated numbers.

    SQL Fiddle

    with rownums as (SELECT row_number() over(partition by category order by cast(entryDate as date) desc) as rn
                     ,*
                     FROM dataTable
    )
    delete from rownums where rn <= 5 --use > 5 for records prior to the last 5 days
    

    Use dense_rank to number the rows if there can be multiple entries per day.

    with rownums as (SELECT dense_rank() over(partition by category order by cast(entryDate as date) desc) as rn
                         ,*
                     FROM dataTable)
    delete from rownums where rn > 5;
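    As a hedged in-memory sketch of what the dense_rank variant keeps (illustrative only; the names are invented):

```python
from collections import defaultdict

def keep_recent_days(rows, keep_days=5):
    """rows: (category, entry_date) pairs; keep only rows whose date is
    among the keep_days most recent distinct dates for its category."""
    by_cat = defaultdict(set)
    for cat, day in rows:
        by_cat[cat].add(day)
    # the keep_days most recent distinct dates per category (dense_rank <= keep_days)
    recent = {cat: set(sorted(days, reverse=True)[:keep_days])
              for cat, days in by_cat.items()}
    return [(cat, day) for cat, day in rows if day in recent[cat]]
```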
    
    qid & accept id: (34031494, 34033423) query: MySQL - Select data from relational tables A, B, C, D, E if record was not found on Table F soup:

    soup wrap:

    First of all you need to start with the subject of what you want to return. That would be users. So this will be your first table to select from (note: use AS to alias tables for brevity):

    Select *
    From users AS u
    

    Next, link together the data you want. How specific you want to be determines how many joins you will need to make (e.g. courseID vs CourseName).

    Now assuming we want lots of human readable data such as names we will link the following tables.

    LEFT OUTER JOIN users_lectures AS ul ON u.id = ul.userid
    LEFT OUTER JOIN lectures AS l ON l.id = ul.lectureid
    LEFT OUTER JOIN courses AS c ON c.id = l.courseid
    LEFT OUTER JOIN attends AS a ON a.userid = u.id AND a.lectureid = l.id
    

    And to top it off: people who didn't attend will not be in the attends table, so we just check for rows where the attends columns are NULL.

    WHERE a.userid IS NULL
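    That NULL check implements an anti-join; a minimal sketch of the same idea (names invented for illustration):

```python
def non_attendees(user_lectures, attends):
    """user_lectures: (user_id, lecture_id) pairs a user is enrolled in;
    attends: pairs actually attended. Returns enrolled-but-absent pairs,
    the rows a LEFT JOIN would leave with NULLs on the attends side."""
    return sorted(set(user_lectures) - set(attends))
```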
    

    As a side note, you won't get any results with this in the WHERE clause

    AND attends.lecture_id != users_lectures.lecture_id
    

    Because you are joining with this statement

    LEFT JOIN attends ON attends.lecture_id = users_lectures.lecture_id
    

    The two conditions contradict each other.

    qid & accept id: (34082441, 34082648) query: MySQL consolodating table rows with overlapping date spans soup:

    soup wrap:

    One way of doing it is by the use of correlated subqueries:

    SELECT DISTINCT
           (SELECT MIN(opens)
           FROM mytable AS t2
           WHERE t2.opens <= t1.closes AND t2.closes >= t1.opens) AS start,
           (SELECT MAX(closes)
           FROM mytable AS t2
           WHERE t2.opens <= t1.closes AND t2.closes >= t1.opens) AS end       
    FROM mytable AS t1
    ORDER BY opens
    

    The WHERE predicates of the correlated subqueries:

    t2.opens <= t1.closes AND t2.closes >= t1.opens

    return all overlapping records related to the current record. Performing aggregation on these records, we can find the start/end dates of each interval: the start date of the interval is the minimum opens date among all overlapping records, whereas the end date is the maximum closes date.

    Demo here

    EDIT:

    The above solution won't work with a set of intervals like the following:

    1. |-----------|
    2. |----|
    3.           |-----|
    

    Record no. 2, when processed, will produce a flawed start/end interval.

    Here's a solution using variables:

    SELECT MIN(start) AS start, MAX(end) AS end
    FROM (
      SELECT @grp := IF(@start = '1900-01-01' OR 
                       (opens <= @end AND closes >= @start), @grp, @grp+1) AS grp,        
             @start := IF(@start = '1900-01-01', opens, 
                          IF(opens <= @end AND closes >= @start, 
                             IF (@start < opens, @start, opens), opens)) AS start,
             @end := IF(@end = '1900-01-01', closes, 
                        IF (opens <= @end AND closes >= @start, 
                          IF (@end > closes, @end, closes), closes)) AS end                 
      FROM mytable
      CROSS JOIN (SELECT @grp := 1, @start := '1900-01-01', @end := '1900-01-01') AS vars
      ORDER BY opens, DATEDIFF(closes, opens) DESC) AS t
    GROUP BY grp
    

    The idea is to start from the left-most opens/closes interval. The variables @start and @end propagate the incrementally expanding consolidated interval (as new overlapping rows are processed) down the interval chain. Once a non-overlapping interval is encountered, [@start - @end] is re-initialized to match the new interval and grp is incremented by one.
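    The same single-pass consolidation can be sketched in ordinary code (an illustration of the algorithm with integer endpoints, not a translation of the query):

```python
def consolidate(intervals):
    """Merge overlapping (opens, closes) intervals after sorting by
    opens ascending, then by span descending, mirroring the ORDER BY."""
    merged = []
    for opens, closes in sorted(intervals, key=lambda iv: (iv[0], -iv[1])):
        if merged and opens <= merged[-1][1]:
            # overlaps the current group: extend its end if needed
            merged[-1][1] = max(merged[-1][1], closes)
        else:
            # gap found: start a new group
            merged.append([opens, closes])
    return [tuple(iv) for iv in merged]
```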

    Demo here

    qid & accept id: (34121399, 34142578) query: Unique post author from Esqueleto soup:

    soup wrap:

    Instead of using subList_select and trying to call max_, you could just use sub_select and have the subquery sort by date and limit 1.

    This is the solution that I found, which seems to do the trick:

        result <- select $ from $ \(user, status_update) -> do
            let subquery = from $ \status_update2 -> do
                where_ (status_update2 ^. StatusUpdateUser ==. user ^. UserId)
                where_ (date (status_update2 ^. StatusUpdatePosted) ==. date now)
                orderBy [desc (status_update2 ^. StatusUpdatePosted)]
                limit 1
                return (status_update2 ^. StatusUpdateId)
    
            where_ (status_update ^. StatusUpdateId ==. sub_select subquery)
            where_ (user ^. UserId ==. status_update ^. StatusUpdateUser)
            return
                ( status_update ^. StatusUpdateId
                , status_update ^. StatusUpdateSubject
                , status_update ^. StatusUpdateMessage
                , user ^. UserEmail
                )
    

    The resulting SQL is:

    SELECT "status_update"."id", "status_update"."subject", "status_update"."message", "user"."email"
    FROM "user", "status_update"
    WHERE ("status_update"."id" = (SELECT "status_update2"."id"
        FROM "status_update" AS "status_update2"
        WHERE ("status_update2"."user" = "user"."id")
        AND (date("status_update2"."posted") = date(date(?)))
        ORDER BY "status_update2"."posted" DESC
        LIMIT 1))
    AND ("user"."id" = "status_update"."user")
    

    To satisfy the "only StatusUpdates from the current day" condition I have defined date and now by importing Database.Esqueleto.Internal.Sql and:

    date :: SqlExpr (Value UTCTime) -> SqlExpr (Value Int)
    date d = unsafeSqlFunction "date" d
    
    now :: SqlExpr (Value UTCTime)
    now = unsafeSqlFunction "date" (val "now" :: SqlExpr (Value String))
    

    However, what exactly "from the current day" means to you could be something different (past 24 hours, in a certain timezone, etc.).

    qid & accept id: (34191191, 34191266) query: Display the different salary figures earned by faculty members arranged in descending order soup:

    soup wrap:

    As I understand it you want to have a list of unique salary values in descending order. This is how you can achieve it:

    SELECT Salary FROM faculty
    group by Salary
    order by Salary desc
    

    Alternative:

    SELECT distinct(Salary) FROM faculty
    order by Salary desc
    

    This will give you all the salaries in descending order. If two people earn 10k, you will only see 10k once.
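    In memory, both queries above amount to this (a sketch for comparison):

```python
def distinct_desc(salaries):
    # SELECT DISTINCT Salary ... ORDER BY Salary DESC, done in memory
    return sorted(set(salaries), reverse=True)
```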

    SELECT Salary FROM faculty
    group by FacultyID, Salary
    order by Salary desc
    

    This will give you all the salaries grouped by faculty id in descending order with no duplicates within a faculty.

    qid & accept id: (34218190, 34219192) query: T-SQL Calculate duration in months between different years of ranges soup:

    soup wrap:

    The approach here is to find all FromDates that don't fall inside another FromDate-ToDate interval, and all ToDates that don't fall inside another FromDate-ToDate interval, give each set a row number ordered by date value, and match the two sets on that row number:

    ;WITH CTE as
    (
      SELECT min(Id) Id ,FromDate, row_number() over (ORDER BY FromDate) rn
      FROM @temp x
      WHERE 
        not exists
          (SELECT * FROM @temp WHERE x.FromDate > FromDate and x.FromDate <= Todate)
      GROUP BY FromDate
    ), CTE2 as
    (
      SELECT Max(Id) Id ,ToDate, row_number() over (ORDER BY ToDate) rn
      FROM @temp x
      WHERE
        not exists
          (SELECT * FROM @temp WHERE x.ToDate >= FromDate and x.ToDate < Todate)
      GROUP BY ToDate
    )
    SELECT SUM(DateDiff(month, CTE.FromDate, CTE2.ToDate))
    FROM CTE
    JOIN CTE2
    ON CTE.rn = CTE2.rn
    

    Result:

    144
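For intuition, the same merge-then-diff logic can be sketched in plain Python (the dates are invented; `month_diff` mirrors T-SQL `DATEDIFF(month, ...)`, which counts month-boundary crossings rather than full months):

```python
from datetime import date

def month_diff(a, b):
    # Mirrors T-SQL DATEDIFF(month, a, b): counts month-boundary crossings
    return (b.year - a.year) * 12 + (b.month - a.month)

def merged_months(ranges):
    """Merge overlapping FromDate/ToDate ranges, then sum the month diffs."""
    total = 0
    cur_start = cur_end = None
    for start, end in sorted(ranges):
        if cur_start is None:
            cur_start, cur_end = start, end
        elif start <= cur_end:                  # overlaps: extend the interval
            cur_end = max(cur_end, end)
        else:                                   # gap: close the interval
            total += month_diff(cur_start, cur_end)
            cur_start, cur_end = start, end
    if cur_start is not None:
        total += month_diff(cur_start, cur_end)
    return total

# Two overlapping ranges plus one separate range (invented dates)
ranges = [(date(2010, 1, 1), date(2010, 7, 1)),
          (date(2010, 3, 1), date(2010, 12, 1)),
          (date(2012, 1, 1), date(2012, 4, 1))]
print(merged_months(ranges))  # 11 + 3 = 14
```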
    
    qid & accept id: (34221483, 34221720) query: Exploding strings to words and copying to another table soup:
    soup wrap:

    What's the best way to do this?

    As always, it depends. There is no one ultimate answer, and you should set up a dev environment and run performance tests first.

    One solution is to use a tally table to split each string into multiple rows:

    SELECT id AS f_id, SUBSTRING_INDEX(SUBSTRING_INDEX(t.name, ' ', n.n), ' ', -1) AS word
    FROM mytable t 
    CROSS JOIN 
    (
       SELECT a.N + b.N * 10 + 1 n
         FROM 
        (SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) a
       ,(SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) b
    ) n
     WHERE n.n <= 1 + (LENGTH(t.name) - LENGTH(REPLACE(t.name, ' ', '')))
    ORDER BY id, n
    

    SqlFiddleDemo

    Output:

    ╔═════╦═══════════╗
    ║f_id ║   word    ║
    ╠═════╬═══════════╣
    ║  1  ║ foo       ║
    ║  1  ║ bar       ║
    ║  1  ║ something ║
    ║  2  ║ something ║
    ║  2  ║ else      ║
    ╚═════╩═══════════╝
    

    You can also consider using external tools to do it. Read data from DB, process in application and save back to DB.

    EDIT:

    Exclude words that are less than 3 characters:

    WHERE n.n <= 1 + (LENGTH(t.name) - LENGTH(REPLACE(t.name, ' ', '')))
      AND LENGTH(SUBSTRING_INDEX(SUBSTRING_INDEX(t.name, ' ', n.n), ' ', -1)) > 2
    

    SqlFiddleDemo2
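For comparison, the "process in the application" route mentioned above is short; here is a Python sketch with invented rows that reproduces the output table, including the length filter from the edit:

```python
# Invented (id, name) rows, mirroring the sample output above
rows = [(1, "foo bar something"), (2, "an something else")]

# Equivalent of the tally-table split: one (id, word) row per word,
# keeping only words longer than 2 characters
words = [(row_id, word)
         for row_id, name in rows
         for word in name.split(" ")
         if len(word) > 2]
print(words)
# [(1, 'foo'), (1, 'bar'), (1, 'something'), (2, 'something'), (2, 'else')]
```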

    qid & accept id: (34232471, 34232592) query: How to run a cursor inside a procedure in SQL Server soup:

    soup wrap:

    You can show the output like this

    CREATE OR REPLACE PROCEDURE SOH_MEMBER IS
      CURSOR studC IS
        SELECT * FROM STUD;
    BEGIN
      FOR c1 IN studC
      LOOP
        IF c1.CITY = 'sohar' THEN
          INSERT INTO SOH_STUDENT (NAME, TEL, SEX)
          VALUES (c1.NAME, c1.TEL, c1.SEX);
        END IF;
      END LOOP;
    END SOH_MEMBER;
    
    /* show the output */
    SELECT * FROM SOH_STUDENT
    

    But you can actually do the whole thing much faster with a single set-based insert (off the top of my head):

    CREATE OR REPLACE PROCEDURE SOH_MEMBER IS
    BEGIN
      INSERT INTO SOH_STUDENT (NAME, TEL, SEX)
      SELECT NAME, TEL, SEX
      FROM STUD
      WHERE CITY = 'sohar';
    END SOH_MEMBER;
    
    /* show the output */
    SELECT * FROM SOH_STUDENT
    
    qid & accept id: (34242338, 34242651) query: MySQL pivoting with VARCHAR soup:

    soup wrap:

    To do this you can use an equivalent of ROW_NUMBER and GROUP BY the calculated RowNumber column:

    SELECT 
        MAX(CASE WHEN Role = 'Admin' THEN Name END) AS `Admin`, 
        MAX(CASE WHEN Role = 'Moderator' THEN Name END) AS `Moderator`, 
        MAX(CASE WHEN Role = 'User' THEN Name END) AS `User`
    FROM (
          SELECT *
            ,@row_num := IF(@prev_value=concat_ws('',t.Role),@row_num+1,1) AS RowNumber
            ,@prev_value := concat_ws('',t.Role)  
          FROM Organization t,
             (SELECT @row_num := 1) x,
             (SELECT @prev_value := '') y
          ORDER BY t.Role   
         ) AS sub
    GROUP BY RowNumber
    

    SqlFiddleDemo

    Output:

    ╔═════════╦════════════╦══════╗
    ║ Admin   ║ Moderator  ║ User ║
    ╠═════════╬════════════╬══════╣
    ║ Tony    ║ (null)     ║ Sara ║
    ║ (null)  ║ (null)     ║ John ║
    ╚═════════╩════════════╩══════╝
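The variable-based ROW_NUMBER trick can be hard to follow, so here is the same pivot logic sketched in plain Python (the role/name rows are invented, chosen to reproduce the output above):

```python
from collections import defaultdict

# Invented rows, chosen to reproduce the output table above
organization = [("Admin", "Tony"), ("User", "Sara"), ("User", "John")]

by_role = defaultdict(list)
for role, name in organization:
    by_role[role].append(name)          # per-role position = RowNumber

roles = ["Admin", "Moderator", "User"]
height = max(len(by_role[r]) for r in roles)
# One output row per RowNumber; missing cells become None (SQL NULL)
pivot = [tuple(by_role[r][i] if i < len(by_role[r]) else None for r in roles)
         for i in range(height)]
print(pivot)  # [('Tony', None, 'Sara'), (None, None, 'John')]
```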
    
    qid & accept id: (34245868, 34246534) query: Return record with start IP and end IP as range that an IP address falls between? soup:

    soup wrap:

    This is a strategy rather than exact code.

    First, update your table to have integer ips. The code is not going to work with the string representation. Here is one method:

    alter table t add ipstart_int bigint;
    alter table t add ipend_int bigint;
    
    update t
        set ipstart_int = dbo.IPAddressToInteger(ipstart),
            ipend_int = dbo.IPAddressToInteger(ipend);
    

    Then create an appropriate index on ipstart_int, ipend_int.

    Then, run the query by doing something like:

    select top 1 ip.*
    from t ip
    where ip.ipstart_int <= dbo.IPAddressToInteger(@ip)
    order by ip.ipstart_int desc;
    

    With a bit of luck, this will use the index and be very quick. You can then compare the resulting end ip to be sure that @ip is, indeed, in the right range.
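`dbo.IPAddressToInteger` is the asker's own helper, so as an assumption, here is a Python sketch of both halves of the strategy: the dot-quad-to-integer conversion, and the "largest start not past the address, then verify the end" lookup (the ranges are invented, with a deliberate gap between them):

```python
def ip_to_int(ip):
    """Dot-quad string to one integer, e.g. '10.0.0.1' -> 167772161."""
    a, b, c, d = (int(part) for part in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

# Invented (start, end, name) ranges, kept sorted by start
ranges = sorted([
    (ip_to_int("10.0.0.0"), ip_to_int("10.0.0.255"), "net A"),
    (ip_to_int("10.0.2.0"), ip_to_int("10.0.2.255"), "net B"),
])

def find_range(ip):
    target = ip_to_int(ip)
    # Largest start <= target (the "top 1 ... order by start desc" step)
    candidates = [r for r in ranges if r[0] <= target]
    if not candidates:
        return None
    start, end, name = candidates[-1]
    # Then check the end, as the answer suggests
    return name if target <= end else None

print(find_range("10.0.2.7"))  # net B
print(find_range("10.0.1.7"))  # None: falls into the gap between ranges
```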

    qid & accept id: (34251134, 34251177) query: Reversing a number using reverse for loop in Postgresql in PgAdmin soup:

    soup wrap:

    When you use L_REV_NO = L_REV_NO||SUBSTRING(L_NO,CNTR,1); the variable starts out as NULL, so each concatenation evaluates NULL || ... and the result stays NULL.

    You need to initialize L_REV_NO variable first:

    DO $$
    DECLARE
      L_NO VARCHAR(5) := '1234';
      L_LEN NUMERIC(5);
      L_REV_NO VARCHAR(5) := '';
    BEGIN
      L_LEN := CHAR_LENGTH(L_NO);
      RAISE NOTICE 'STRING LENGTH IS %', L_LEN;
      FOR CNTR IN REVERSE L_LEN..1 LOOP
        L_REV_NO := L_REV_NO || SUBSTRING(L_NO, CNTR, 1);
      END LOOP;
      RAISE NOTICE 'NUMBER IS %', L_NO;
      RAISE NOTICE 'REVERSE NUMBER IS %', L_REV_NO;
    END $$;
    

    Another simple solution is to use the built-in REVERSE function:

    SELECT REVERSE('1234')
    -- 4321
    

    Demo

    qid & accept id: (34254637, 34254689) query: SQL Query to return rows where a column value appears multiple time soup:

    soup wrap:

    One solution is to use a GROUP BY query, grouping by FixtureID and counting the rows for each FixtureID. This query will select all FixtureIDs with both players 1 and 3:

    select
      FixtureID
    from
      Results
    where
      PlayerID IN (1,3)
    group by
      FixtureID
    having
      count(*)=2
    

    then to get the record from the Results table you can use this query:

    select *
    from Results
    where FixtureID IN (
      select FixtureID
      from Results
      where PlayerID IN (1,3)
      group by FixtureID
      having count(*)=2
    )
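Note that `count(*) = 2` assumes each player appears at most once per fixture; `COUNT(DISTINCT PlayerID) = 2` is the safer form if duplicates are possible. A runnable sketch with SQLite and invented fixtures:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Results (FixtureID INTEGER, PlayerID INTEGER)")
conn.executemany("INSERT INTO Results VALUES (?, ?)",
                 [(100, 1), (100, 3), (101, 1), (101, 2), (102, 3)])

# Only fixtures containing BOTH players 1 and 3 survive the HAVING clause
rows = conn.execute("""
    SELECT * FROM Results
    WHERE FixtureID IN (
        SELECT FixtureID FROM Results
        WHERE PlayerID IN (1, 3)
        GROUP BY FixtureID
        HAVING COUNT(DISTINCT PlayerID) = 2)
    ORDER BY FixtureID, PlayerID
""").fetchall()
print(rows)  # [(100, 1), (100, 3)] -- only fixture 100 has both players
```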
    
    qid & accept id: (34273934, 34273945) query: Netezza automatically rounding down decimal values soup:

    soup wrap:

    Change one of the arguments to NUMERIC to avoid integer division:

    SELECT (4 -1) / 2.0  AS result
    

    or:

    SELECT (4-1) / CAST(2 AS NUMERIC(15,6)) AS result
    

    Division:

    1 / 10   -> 0
    1.0 / 10 -> 0.1
    1 / 10.0 -> 0.1
    1.0/10.0 -> 0.1
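SQLite applies the same rule, which makes it easy to verify from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Integer / integer truncates toward zero; a decimal literal on either
# side promotes the whole expression to floating point
print(conn.execute("SELECT (4 - 1) / 2").fetchone()[0])    # 1
print(conn.execute("SELECT (4 - 1) / 2.0").fetchone()[0])  # 1.5
```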
    
    qid & accept id: (34278143, 34278301) query: Mysql finding results not present in jointable soup:

    soup wrap:

    An easy solution is just to require that the entry is not in your completed tasks table:

    select * from users, tasks
    where not exists (
        select * from users_tasks
        where users.id = users_tasks.user_id and tasks.id = users_tasks.task_id
    );
    

    Result:

    +------+-------+------+-------------+
    | id   | name  | id   | name        |
    +------+-------+------+-------------+
    |    3 | susie |    2 | Shower      |
    |    2 | mike  |    3 | Check Email |
    |    3 | susie |    3 | Check Email |
    +------+-------+------+-------------+
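A runnable sketch of the anti-join with SQLite (the ids and names mirror the sample output; here mike has completed only 'Shower'):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER, name TEXT);
    CREATE TABLE tasks (id INTEGER, name TEXT);
    CREATE TABLE users_tasks (user_id INTEGER, task_id INTEGER);
    INSERT INTO users VALUES (2, 'mike'), (3, 'susie');
    INSERT INTO tasks VALUES (2, 'Shower'), (3, 'Check Email');
    INSERT INTO users_tasks VALUES (2, 2);  -- mike has completed Shower
""")

# Cross join users x tasks, then keep only the pairs with no completion row
missing = conn.execute("""
    SELECT u.name, t.name
    FROM users u, tasks t
    WHERE NOT EXISTS (
        SELECT 1 FROM users_tasks ut
        WHERE u.id = ut.user_id AND t.id = ut.task_id)
    ORDER BY t.id, u.id
""").fetchall()
print(missing)
# [('susie', 'Shower'), ('mike', 'Check Email'), ('susie', 'Check Email')]
```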
    
    qid & accept id: (34309419, 34323523) query: How to Join tables to assign one record to multiple records in a "FIFO" order soup:

    soup wrap:

    Calculate a running sum of Qty to know how many rows to skip from ExtInvoice, using

    SUM(Qty) OVER (PARTITION BY P_No ORDER BY NumOrder)
    

    Use OUTER APPLY to join tables and pick the number of rows defined by Qty in TOP.

    Sample data

    I added one more P_No to verify that results are partitioned correctly.

    DECLARE @ExtInvoice TABLE 
    (Ext_Invoice int, P_No int, Part int, InvoiceDate int, Due_Date int, NumOrder int);
    
    INSERT INTO @ExtInvoice
    (Ext_Invoice, P_No, Part, InvoiceDate, Due_Date, NumOrder)
    VALUES
    (571, 607, 7991, 151116, 151222, 1),
    (572, 607, 7991, 151120, 151228, 2),
    (573, 607, 7991, 151127, 160104, 3),
    (574, 608, 7991, 151127, 160104, 1);
    
    
    DECLARE @InternalInvoice TABLE
    (Invoice_No int, Original varchar(5), P_No int, Part int, Qty int, NumOrder int);
    
    INSERT INTO @InternalInvoice
    (Invoice_No, Original, P_No, Part, Qty, NumOrder)
    VALUES
    (198, '607', 607, 7991, 2, 1),
    (199, 'RE607', 607, 7991, 1, 2),
    (200, 'RE607', 607, 7991, 1, 3),
    (201, 'RE608', 608, 7991, 1, 1);
    

    Query

    In the final query you should list the actual column names instead of *. To make it work efficiently there should be an index on the ExtInvoice table on (P_No, NumOrder).

    WITH
    CTE_InternalInvoices
    AS
    (
        SELECT
            I.*
            ,SUM(Qty) OVER (PARTITION BY P_No ORDER BY NumOrder) AS SumQty
        FROM
            @InternalInvoice AS I
    )
    SELECT
        *
    FROM
        CTE_InternalInvoices
        OUTER APPLY
        (
            SELECT TOP(CTE_InternalInvoices.Qty) *
            FROM @ExtInvoice AS E
            WHERE
                E.P_No = CTE_InternalInvoices.P_No
                AND E.NumOrder > CTE_InternalInvoices.SumQty - CTE_InternalInvoices.Qty
            ORDER BY E.NumOrder
        ) AS CA
    ORDER BY CTE_InternalInvoices.Invoice_No;
    

    Result

    +------------+----------+------+------+-----+----------+--------+-------------+------+------+-------------+----------+----------+
    | Invoice_No | Original | P_No | Part | Qty | NumOrder | SumQty | Ext_Invoice | P_No | Part | InvoiceDate | Due_Date | NumOrder |
    +------------+----------+------+------+-----+----------+--------+-------------+------+------+-------------+----------+----------+
    |        198 | 607      |  607 | 7991 |   2 |        1 |      2 | 571         | 607  | 7991 | 151116      | 151222   | 1        |
    |        198 | 607      |  607 | 7991 |   2 |        1 |      2 | 572         | 607  | 7991 | 151120      | 151228   | 2        |
    |        199 | RE607    |  607 | 7991 |   1 |        2 |      3 | 573         | 607  | 7991 | 151127      | 160104   | 3        |
    |        200 | RE607    |  607 | 7991 |   1 |        3 |      4 | NULL        | NULL | NULL | NULL        | NULL     | NULL     |
    |        201 | RE608    |  608 | 7991 |   1 |        1 |      1 | 574         | 608  | 7991 | 151127      | 160104   | 1        |
    +------------+----------+------+------+-----+----------+--------+-------------+------+------+-------------+----------+----------+
    

    SQL Fiddle
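The OUTER APPLY plus running-sum idea can also be sketched in plain Python: keep a per-P_No queue of external invoices and let each internal invoice consume the next Qty entries (the data mirrors the sample above):

```python
from collections import defaultdict

# (Ext_Invoice, P_No) already in NumOrder, and (Invoice_No, P_No, Qty),
# mirroring the sample data above
ext = [(571, 607), (572, 607), (573, 607), (574, 608)]
internal = [(198, 607, 2), (199, 607, 1), (200, 607, 1), (201, 608, 1)]

queues = defaultdict(list)
for ext_no, p_no in ext:
    queues[p_no].append(ext_no)

pairs = []
taken = defaultdict(int)      # running sum of Qty per P_No, like SUM() OVER
for inv_no, p_no, qty in internal:
    chunk = queues[p_no][taken[p_no]:taken[p_no] + qty]
    taken[p_no] += qty
    pairs.append((inv_no, chunk or None))   # no rows left -> NULL-style None
print(pairs)
# [(198, [571, 572]), (199, [573]), (200, None), (201, [574])]
```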

    qid & accept id: (34314266, 34314476) query: How can I mark which rows come from which table when I do a join? soup:

    soup wrap:

    The details of your answer are going to depend a LOT on the specific database platform you're using. With that said, most database platforms support a CASE statement, which allows you to conditionally return values (including static strings) based on a variety of conditions.

    More generally, however, you're going to be doing an outer join based on Table 1 fields matching Table 2 fields. Within your code, if the Table 1 fields being returned are null, that indicates the data came from Table 2, and vice versa. If neither are null, the data came from both.

    You also have another option, to

          select from Table 1 
    UNION select from Table 2
    

    Then you can have a static field indicating which table each record is from, such as

          SELECT 'Table 1' AS source_table, field1, field2 FROM Table1 
    UNION SELECT 'Table 2' AS source_table, field1, field2 FROM Table2
    

    This option will probably create more work in your code, but may put less burden on the database server.

    There are probably more options, but those are the ones that jumped out at me.
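For example, with SQLite (the table names and rows are invented; note `table` itself is a reserved word in most dialects, so the label column is aliased here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Table1 (field1 TEXT);
    CREATE TABLE Table2 (field1 TEXT);
    INSERT INTO Table1 VALUES ('a'), ('b');
    INSERT INTO Table2 VALUES ('b');
""")

# The static label distinguishes the origin of each row; because the label
# differs, UNION keeps 'b' from both tables instead of deduplicating it
rows = conn.execute("""
    SELECT 'Table 1' AS source_table, field1 FROM Table1
    UNION
    SELECT 'Table 2', field1 FROM Table2
    ORDER BY source_table, field1
""").fetchall()
print(rows)  # [('Table 1', 'a'), ('Table 1', 'b'), ('Table 2', 'b')]
```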

    qid & accept id: (34340729, 34340984) query: Need to update data from another database using db link soup:

    soup wrap:

    I think you've gotten all tied up between the rows you're updating and the rows you're using to update the column values with.

    If you think about it, you're wanting to update rows in your w_product_d table where the created_on_dt is null, which means that your update statement will have a basic structure of:

    update w_product_d wpd
    set    ...
    where  wpd.created_on_dt is null;
    

    Once you have that, it's easy then to slot in the column you're updating and what you're updating it with:

    update w_product_d wpd
    set    wpd.created_on_dt = (select min(creation_date)
                                from   mtl_system_items_b@afldev b
                                where  to_char(b.inventory_item_id) = wpd.integration_id)
    where  wpd.created_on_dt is null;
    
    qid & accept id: (34353369, 34353524) query: Find group of records that match multiple values soup:

    soup wrap:

    You can do this with conditional aggregation:

    select parentid 
    from tablename
    group by parentid
    having sum(case when datavalue = 1 then 1 else 0 end) > 0 and
           sum(case when datavalue = 6 then 1 else 0 end) > 0
    

    Another way is to use exists:

    select distinct parentid
    from tablename t1
    where exists(select * from tablename where parentid = t1.parentid and datavalue = 1) and
          exists(select * from tablename where parentid = t1.parentid and datavalue = 6)
    

    Another way is counting distinct occurrences:

    select parentid 
    from tablename
    where datavalue in(1, 6)
    group by parentid
    having count(distinct datavalue) = 2
    
    qid & accept id: (34417879, 34419953) query: Find highest and lowest selling item in a table soup:

    soup wrap:

    SQL Fiddle

    Oracle 11g R2 Schema Setup:

    create table orders (
      ono      number(5) not null primary key,
      cno      number(5),
      eno      number(4),
      received date,
      shipped  date
    );
    
    INSERT INTO orders
    SELECT 1020, 1, 1, DATE '2015-12-21', NULL FROM DUAL UNION ALL
    SELECT 1021, 1, 1, DATE '2015-12-20', DATE '2015-12-20' FROM DUAL UNION ALL
    SELECT 1022, 1, 1, DATE '2015-12-18', DATE '2015-12-20' FROM DUAL UNION ALL
    SELECT 1023, 1, 1, DATE '2015-12-21', NULL FROM DUAL UNION ALL
    SELECT 1024, 1, 1, DATE '2015-12-20', DATE '2015-12-20' FROM DUAL;
    
    create table odetails (
      ono      number(5) not null references orders(ono),
      pno      number(5) not null,
      qty      integer check(qty > 0),
      primary key (ono,pno)
    );
    
    INSERT INTO odetails
    SELECT 1020, 10506, 1 FROM DUAL UNION ALL
    SELECT 1020, 10507, 1 FROM DUAL UNION ALL
    SELECT 1020, 10508, 2 FROM DUAL UNION ALL
    SELECT 1020, 10509, 3 FROM DUAL UNION ALL
    SELECT 1021, 10601, 4 FROM DUAL UNION ALL
    SELECT 1022, 10601, 1 FROM DUAL UNION ALL
    SELECT 1022, 10701, 1 FROM DUAL UNION ALL
    SELECT 1023, 10800, 1 FROM DUAL UNION ALL
    SELECT 1024, 10900, 1 FROM DUAL;
    

    Query 1 - The ono and pnos for the pno which has sold the maximum total quantity in December 2015:

    SELECT ono,
           pno,
           TOTAL_QTY
    FROM (
      SELECT q.*,
             RANK() OVER ( ORDER BY TOTAL_QTY DESC ) AS rnk
      FROM   (
        SELECT od.ono,
               od.PNO,
               SUM( od.QTY ) OVER ( PARTITION BY od.PNO ) AS TOTAL_QTY
        FROM   ODETAILS od
               INNER JOIN
               orders o
               ON ( o.ono = od.ono )
        WHERE  TRUNC( o.received, 'MM' ) = DATE '2015-12-01'
    --    WHERE  EXTRACT( MONTH FROM o.received ) = 12
      ) q
    )
    WHERE rnk = 1
    

    Change the WHERE clause to get the results for any December rather than just December 2015.

    Results:

    |  ONO |   PNO | TOTAL_QTY |
    |------|-------|-----------|
    | 1021 | 10601 |         5 |
    | 1022 | 10601 |         5 |
    

    Query 2 - The ono and pnos for the items which have sold the maximum quantity in a single order in December 2015:

    SELECT ono,
           pno,
           qty
    FROM (
      SELECT od.*,
             RANK() OVER ( ORDER BY od.qty DESC ) AS qty_rank
      FROM   ODETAILS od
             INNER JOIN
             orders o
             ON ( o.ono = od.ono )
      WHERE  TRUNC( o.received, 'MM' ) = DATE '2015-12-01'
      --    WHERE  EXTRACT( MONTH FROM o.received ) = 12
    )
    WHERE qty_rank = 1
    

    Change the WHERE clause to get the results for any December rather than just December 2015.

    Results:

    |  ONO |   PNO | QTY |
    |------|-------|-----|
    | 1021 | 10601 |   4 |
    
    qid & accept id: (34435022, 34435160) query: Finding out how two tables are connected by looking in a third soup:

    soup wrap:

    This is a bit tricky, because the id could refer to either table. One solution is group by with a union all. Here is a generic approach, assuming that the ids in the two reference tables have different values:

    select b.boxid
    from boxes b left join
         (select id, name 
          from toys t
          union all
          select id, name
          from kitchen k
         ) tk
         on b.id = tk.id
    group by b.boxid
    having sum(case when tk.name = 'Car' then 1 else 0 end) > 0 and
           sum(case when tk.name = 'Fork' then 1 else 0 end) > 0;
    

    Note: In MySQL, I would write this query as:

    select b.boxid
    from boxes b left join
         (select id, name 
          from toys t
          where t.name in ('Car', 'Fork')
          union all
          select id, name
          from kitchen k
          where k.name in ('Car', 'Fork')
         ) tk
         on b.id = tk.id
    group by b.boxid
    having count(distinct name) = 2;
    

    You could write it this way in any SQL dialect, actually.
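To make that portability claim concrete, here is a minimal sketch of the count(distinct)-based version run against toy tables in SQLite via Python (table contents invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE boxes (boxid INT, id INT);
CREATE TABLE toys (id INT, name TEXT);
CREATE TABLE kitchen (id INT, name TEXT);
INSERT INTO toys VALUES (1, 'Car'), (2, 'Ball');
INSERT INTO kitchen VALUES (10, 'Fork'), (11, 'Spoon');
-- box 100 holds a Car and a Fork; box 200 only a Car
INSERT INTO boxes VALUES (100, 1), (100, 10), (200, 1);
""")
# Only a box matching both distinct names survives the HAVING filter.
rows = conn.execute("""
SELECT b.boxid
FROM boxes b LEFT JOIN
     (SELECT id, name FROM toys WHERE name IN ('Car', 'Fork')
      UNION ALL
      SELECT id, name FROM kitchen WHERE name IN ('Car', 'Fork')) tk
     ON b.id = tk.id
GROUP BY b.boxid
HAVING COUNT(DISTINCT tk.name) = 2
""").fetchall()
print(rows)  # [(100,)]
```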

    qid & accept id: (34486588, 34486810) query: Querying possible choices and existing transactions soup:

    soup wrap:

    Edit: In case you can't use the OVER clause on aggregate functions in 2000, the following should accomplish the same thing. Sorry I missed the 2000 requirement, and unfortunately you need either a subquery or derived table. The fastest way to accomplish this type of problem depends on the data, but I like to do the grouping on the key only in a derived table, and then join to that, which I believe becomes better performing for larger sets of data.

    select i.ImportNo,
        i.ImportDate,
        coalesce(i_s.Completed, 0) as Completed,
        v.ID VendorID,
        v.Name,
        case when grp.Completed = grp.Total then 1
            else 0
        end as BatchCompleted
    from Imports i
    left join Vendors v on i.ImportDate between v.StartDate and v.EndDate
    left join ImportsStatus i_s on i.ImportNo = i_s.ImportNo and v.ID = i_s.VendorID
    join (select i.ImportNo,
            sum(cast(i_s.Completed as int)) Completed,
            count(v.ID) Total
        from Imports i
        left join Vendors v on i.ImportDate between v.StartDate and v.EndDate
        left join ImportsStatus i_s on i.ImportNo = i_s.ImportNo and v.ID = i_s.VendorID
        group by i.ImportNo
    ) grp on grp.ImportNo = i.ImportNo
    

    I believe the following query might be an easier to read version of what you're looking for:

    select i.ImportNo,
        i.ImportDate,
        coalesce(i_s.Completed, 0) as Completed,
        v.ID VendorID,
        v.Name,
        iif(sum(cast(i_s.Completed as int)) over (partition by i.ImportNo) = count(v.ID) over (partition by i.ImportNo), 1, 0) as BatchCompleted
    from Imports i
    left join Vendors v on i.ImportDate between v.StartDate and v.EndDate
    left join ImportsStatus i_s on i.ImportNo = i_s.ImportNo and v.ID = i_s.VendorID
    

    The idea here is to use partitioned sums/counts instead of subqueries to determine if the batch is completed or not. It also uses LEFT JOIN to ensure each import is included. I reordered and placed ImportsStatus at the end to prevent the duplicate problem you were having.
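The partitioned-sum-equals-count idea can be sketched in isolation with a single toy status table (SQLite via Python; column names invented; SQLite >= 3.25 window functions assumed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE statuses (import_no INT, vendor_id INT, completed INT);
INSERT INTO statuses VALUES (1, 10, 1), (1, 11, 1), (2, 10, 1), (2, 11, 0);
""")
# A batch is complete when the per-import sum of completed flags equals the
# per-import row count -- no subquery needed, just two window aggregates.
rows = conn.execute("""
SELECT import_no, vendor_id,
       CASE WHEN SUM(completed) OVER (PARTITION BY import_no)
               = COUNT(vendor_id) OVER (PARTITION BY import_no)
            THEN 1 ELSE 0 END AS batch_completed
FROM statuses
ORDER BY import_no, vendor_id
""").fetchall()
print(rows)  # [(1, 10, 1), (1, 11, 1), (2, 10, 0), (2, 11, 0)]
```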

    qid & accept id: (34508028, 34508190) query: select Y if column if difference of count(colmn1) with value A and B is 0 using aggregate funtion soup:

    soup wrap:

    Using SUM, you can achieve the results; there are many other ways to do so.

    SELECT CASE
               WHEN SUM(CASE
                            WHEN c2.movetype = 'C' THEN
                             1
                            WHEN c2.movetype = 'D' THEN
                             -1
                            ELSE
                             0
                        END) = 0 THEN
                'Y'
               ELSE
                'N'
           END
      FROM com24 c2
     WHERE c2.csnstat != 90
     GROUP BY c2.poliref,
              c2.inrctyp,
              c2.inrcref,
              c2.csntype,
              c2.duedate,
              c2.itrno
    

    UPDATE: If you want the values in a cursor, you can write the code like this

    CURSOR C1 AS
    WITH t_table AS (
        SELECT c2.poliref,
               c2.inrctyp,
               c2.inrcref,
               c2.csntype,
               c2.duedate,
               c2.itrno,
               CASE
                   WHEN SUM(CASE
                            WHEN c2.movetype = 'C' THEN
                             1
                            WHEN c2.movetype = 'D' THEN
                             -1
                            ELSE
                             0
                        END) = 0 THEN
                'Y'
               ELSE
                'N'
               END AS flag
          FROM com24 c2
         WHERE c2.csnstat != 90
         GROUP BY c2.poliref,
                  c2.inrctyp,
                  c2.inrcref,
                  c2.csntype,
                  c2.duedate,
                  c2.itrno)
    SELECT *
      FROM t_table       
     WHERE flag = 'Y';
    

    Your requirement may be a little different, but you can get some idea from the answers on how to write your code.
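The +1/-1 trick at the heart of the answer can be sketched on its own (toy table and values, run through SQLite via Python): credits and debits cancel, so a zero sum means the group is balanced.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE moves (poliref INT, movetype TEXT);
INSERT INTO moves VALUES (1, 'C'), (1, 'D'), (2, 'C'), (2, 'C'), (2, 'D');
""")
# Credits count +1, debits count -1; a zero sum flags the group 'Y'.
rows = conn.execute("""
SELECT poliref,
       CASE WHEN SUM(CASE WHEN movetype = 'C' THEN 1
                          WHEN movetype = 'D' THEN -1
                          ELSE 0 END) = 0
            THEN 'Y' ELSE 'N' END AS flag
FROM moves
GROUP BY poliref
ORDER BY poliref
""").fetchall()
print(rows)  # [(1, 'Y'), (2, 'N')]
```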

    qid & accept id: (34516501, 34516733) query: MSSQL Order by date with distinct soup:

    soup wrap:

    If you are using SQL Server 2012+ you could use FORMAT function:

    DECLARE @cols AS NVARCHAR(MAX);
    
    ;WITH cte AS       -- get only one date per month/year
    (
      SELECT MIN(StartDate) AS StartDate
      FROM #Products2 
      GROUP BY YEAR(StartDate),MONTH(StartDate)
    )
    SELECT @cols = STUFF((SELECT  ',' + QUOTENAME(FORMAT(StartDate, 'MMM-yy'))
                          FROM cte
                          ORDER BY StartDate      
                          FOR XML PATH('')),
                        1, 1, N'');
    
    SELECT @cols;
    

    LiveDemo

    Output:

    ╔══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╗
    ║                                                        result                                                        ║
    ╠══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╣
    ║ [Dec-15],[Jan-16],[Feb-16],[Mar-16],[Apr-16],[May-16],[Jun-16],[Jul-16],[Aug-16],[Sep-16],[Oct-16],[Nov-16],[Dec-16] ║
    ╚══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╝
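Outside SQL Server, the same "one label per month, in order, comma-joined" result can be sketched portably; here the list is built client-side instead of with STUFF(... FOR XML PATH('')), using SQLite's strftime in place of FORMAT (toy dates, invented table name):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (start_date TEXT);
INSERT INTO products VALUES
    ('2015-12-05'), ('2015-12-20'), ('2016-01-03'), ('2016-02-14');
""")
# One representative date per month/year, in date order (the CTE's job)...
months = [r[0] for r in conn.execute("""
SELECT MIN(start_date)
FROM products
GROUP BY strftime('%Y-%m', start_date)
ORDER BY MIN(start_date)
""")]
# ...then the bracketed, comma-separated header is joined client-side.
cols = ",".join("[" + m[:7] + "]" for m in months)
print(cols)  # [2015-12],[2016-01],[2016-02]
```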
    
    qid & accept id: (34604707, 34607421) query: how to join tables when the join is on 2 fields? soup:

    soup wrap:

    To insert into C, the query used above is correct:

    Insert into C (productid,partid) 
    select A.productid, A.partid 
    from A join B on A.productid = B.productid AND A.partid = B.partid
    

    To delete from B, you can use the query below:

    Delete B from B
    join C
    on C.productid = B.productid 
    AND C.partid = B.partid
    

    Since you are deleting the records in B, you have to name B in the DELETE statement.
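Where the DELETE ... JOIN syntax isn't available (it is SQL Server/MySQL specific), a correlated EXISTS does the same job; a minimal sketch with toy rows in SQLite via Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE B (productid INT, partid INT);
CREATE TABLE C (productid INT, partid INT);
INSERT INTO B VALUES (1, 1), (1, 2), (2, 1);
INSERT INTO C VALUES (1, 1), (2, 1);
""")
# Delete every B row that has a matching (productid, partid) pair in C.
conn.execute("""
DELETE FROM B
WHERE EXISTS (SELECT 1 FROM C
              WHERE C.productid = B.productid
                AND C.partid = B.partid)
""")
rows = conn.execute("SELECT productid, partid FROM B").fetchall()
print(rows)  # [(1, 2)]
```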

    qid & accept id: (34610639, 34610714) query: Combine two tables, exclude same records soup:

    soup wrap:

    Looks like you need a FULL OUTER JOIN with the common part excluded. You can simulate it with:

    SELECT T1.col_name
    FROM T1 
    LEFT JOIN T2
      ON T1.col_name = T2.col_name
    WHERE T2.col_name IS NULL
    UNION
    SELECT T2.col_name
    FROM T2 
    LEFT JOIN T1
      ON T1.col_name = T2.col_name
    WHERE T1.col_name IS NULL;
    

    SqlFiddleDemo

    ╔══════════╗
    ║ col_name ║
    ╠══════════╣
    ║ C        ║
    ║ D        ║
    ║ E        ║
    ║ F        ║
    ║ G        ║
    ╚══════════╝
    

    More info: Visual Representation of SQL Joins


    SELECT <select_list>
    FROM Table_A A
    FULL OUTER JOIN Table_B B
    ON A.Key = B.Key
    WHERE A.Key IS NULL OR B.Key IS NULL
    

    Unfortunately MySQL does not support FULL OUTER JOIN, so I used a union of 2 LEFT JOINs.


    All images from http://www.codeproject.com/Articles/33052/Visual-Representation-of-SQL-Joins

    Addendum

    But what if I have two different tables with different columns, but both of them have one same column? The used SELECT statements have a different number of columns

    You could easily expand it with additional columns.

    SELECT 'T1' AS tab_name, T1.col_name, T1.col1, NULL AS col2
    FROM  T1
    LEFT JOIN  T2
      ON T1.col_name=  T2.col_name
    WHERE T2.col_name IS NULL
    UNION
    SELECT 'T2' AS tab_name, T2.col_name, NULL, T2.col2
    FROM  T2
    LEFT JOIN  T1
      ON T1.col_name=  T2.col_name
    WHERE T1.col_name IS NULL;
    

    LiveDemo

    Output:

    ╔══════════╦══════════╦══════╦═════════════════════╗
    ║ tab_name ║ col_name ║ col1 ║        col2         ║
    ╠══════════╬══════════╬══════╬═════════════════════╣
    ║ T1       ║ C        ║    3 ║                     ║
    ║ T1       ║ D        ║    4 ║                     ║
    ║ T2       ║ E        ║      ║ 2016-01-03 00:00:00 ║
    ║ T2       ║ F        ║      ║ 2016-01-02 00:00:00 ║
    ║ T2       ║ G        ║      ║ 2016-01-01 00:00:00 ║
    ╚══════════╩══════════╩══════╩═════════════════════╝
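The union-of-two-LEFT-JOINs simulation can be sketched end to end with toy data (SQLite via Python here, another engine that historically lacked FULL OUTER JOIN):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE T1 (col_name TEXT);
CREATE TABLE T2 (col_name TEXT);
INSERT INTO T1 VALUES ('A'), ('B'), ('C'), ('D');
INSERT INTO T2 VALUES ('A'), ('B'), ('E');
""")
# Rows unique to T1, unioned with rows unique to T2 -- the two anti-join
# halves of a FULL OUTER JOIN with the intersection removed.
rows = conn.execute("""
SELECT T1.col_name FROM T1
LEFT JOIN T2 ON T1.col_name = T2.col_name
WHERE T2.col_name IS NULL
UNION
SELECT T2.col_name FROM T2
LEFT JOIN T1 ON T1.col_name = T2.col_name
WHERE T1.col_name IS NULL
ORDER BY 1
""").fetchall()
print(rows)  # [('C',), ('D',), ('E',)]
```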
    
    qid & accept id: (34612343, 34613375) query: SQL Server Linked Server Join soup:

    soup wrap:

    You cannot use variables in place of database, schema or table names.

    Instead you can build and execute dynamic SQL statements, using sp_ExecuteSQL.

    This example won't work, as the server name is seen as a string and not a server object.

    Failed Example

    /* Anti-pattern.
     * Does not work.
     */
    DECLARE @Server    SYSNAME = 'Server001';
    
    SELECT
        *
    FROM
        @Server.Database1.dbo.Table1
    ;
    

    This example shows a method that does work. Here the SQL statement is built as a string, which is then executed.

    /* Dynamic SQL statement.
     * Will work.
     */
    DECLARE @Server    SYSNAME = 'Server001';
    DECLARE @Statement    NVARCHAR(255);
    
    SET @Statement = 'SELECT * FROM ' + QUOTENAME(@Server) + '.Database1.dbo.Table1;';
    
    EXECUTE sp_ExecuteSQL @Statement;
    

    As ever, please be careful when generating and executing dynamic SQL statements. You do not want to open yourself up to SQL injection attacks. Look into OPENROWSET or check the passed server name against the code kindly supplied by @Devart above (SELECT name FROM sys.servers WHERE server_id > 0) before executing.

    EDIT 1: Added more detail to the paragraph on SQL injection.

    EDIT 2: Removed square brackets from 2nd example query, replaced with QUOTENAME, as per @TTs comment.
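The allowlist check suggested above can be sketched in a few lines of Python (names illustrative; in practice the allowed set would be fetched with SELECT name FROM sys.servers WHERE server_id > 0):

```python
# Illustrative allowlist; in practice fetched from sys.servers.
allowed = {"Server001", "Server002"}

def build_statement(server: str) -> str:
    # Refuse anything not on the allowlist before splicing it into SQL,
    # then bracket-quote it (doubling any ']') as QUOTENAME would.
    if server not in allowed:
        raise ValueError("unknown linked server: " + server)
    return "SELECT * FROM [" + server.replace("]", "]]") + "].Database1.dbo.Table1;"

print(build_statement("Server001"))
```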

    qid & accept id: (34643471, 34645285) query: Using BITAND in JOIN clause soup:

    soup wrap:

    Found the solution.

    This query:

    select  data.* 
            ,p.*
    from    data
    inner join p ON 
            (bitand(data.bits,2)=p.bits OR bitand(data.bits,66)=p.bits) OR
            (bitand(data.bits,16)=p.bits) 
    order by data.bits, p.bits
    

    Produces the desired result:

    BITS    ORGANS                  BUCKET
    2       LUNG [2]                LUNG [2]
    18      LUNG [2]; KIDNEY [16]   LUNG [2]
    18      LUNG [2]; KIDNEY [16]   KIDNEY [16]
    64      HEART [64]              HEART [64]
    66      LUNG [2]; HEART [64]    LUNG [2]
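The essence of the BITAND join is the mask test; a simplified sketch below uses a single mask test per row instead of the OR of specific masks above, with toy data and SQLite's & operator (via Python) standing in for Oracle's BITAND:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE data (bits INT);
CREATE TABLE p (bits INT, bucket TEXT);
INSERT INTO data VALUES (2), (18), (64);
INSERT INTO p VALUES (2, 'LUNG'), (16, 'KIDNEY'), (64, 'HEART');
""")
# A row joins to every bucket whose mask is fully set in its bits value.
rows = conn.execute("""
SELECT d.bits, p.bucket
FROM data d
JOIN p ON (d.bits & p.bits) = p.bits
ORDER BY d.bits, p.bits
""").fetchall()
print(rows)  # [(2, 'LUNG'), (18, 'LUNG'), (18, 'KIDNEY'), (64, 'HEART')]
```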
    
    qid & accept id: (34660036, 34661392) query: MySQL subquery from another database where table name depends on main query soup:

    soup wrap:

    It seems you are trying to perform variable substitution on a SQL table name.

      FROM dbName.table_guest_[??note: need to insert e.ID??] a 
    

    You Can't Do That™ directly in SQL. You'll need to write your PHP code to generate that table name, or use the string-processing feature that the MySQL team calls prepared statements.

    If your different databases are hosted on the same MySQL server, you can write queries that refer to more than one of them at a time. Simply give the database name as well as the table name. For example, if you have databases db1 and db2 you can do this.

     SELECT whatever
       FROM db1.events e
       JOIN db2.user_118 u ON whatever = whatever
    
    qid & accept id: (34687839, 34688375) query: SQL Select command to ignore until a condition is met soup:

    soup wrap:

    You could first select the time of the first record with the special condition, in a sub-select (I put it in a with clause). This would return exactly one record per Folder. And then select all records for the same folder that have a time stamp that is not less than that one:

    WITH StartRec AS ( 
        SELECT  FolderNo, MIN(SetDatetime) SetDatetime
        FROM    ST3ROTE_Message
        WHERE   FolderNo = @DropSelect
            AND MessageNumber = 27 -- your starting condition
            AND SetDatetime BETWEEN 
                  DATEADD(hour, 18, DATEDIFF(day, 1, GETDATE())) 
                  AND CURRENT_TIMESTAMP
        GROUP BY FolderNo)
    SELECT     M.ProductionID, M.FolderNo, M.SetDatetime, 
               M.MessageNumber, M.MessageText, M.MessageLocation, 
               MD.GrossCopies, MD.NetCopies, MD.Speed 
    FROM       ST3ROTE_Message AS M
    INNER JOIN StartRec
            ON StartRec.FolderNo = M.FolderNo 
           AND StartRec.SetDatetime <= M.SetDatetime
    LEFT JOIN  ST3ROTE_MessageData AS MD
            ON M.MessageID = MD.MessageID 
    WHERE      M.FolderNo = @DropSelect
    

    Here is a fiddle. Note that since the fiddle works with only a little data, it will not return any records if executed after today.

    Also note that your way of calculating "yesterday at 18:00" can be done a lot more efficiently, as I have included in the query above:

    DATEADD(hour, 18, DATEDIFF(day, 1, GETDATE())) 
    

    This first calculates the number of whole days between day 1 (earliest date has value 0) and now. Then this is used as a date (= yesterday 0:00) to which 18 hours are added.
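The same "yesterday at 18:00" computation, sketched in Python's datetime for comparison (a fixed stand-in for GETDATE() keeps the result deterministic):

```python
from datetime import datetime, timedelta

now = datetime(2016, 1, 5, 9, 30)  # stand-in for GETDATE()
# Truncate to midnight, step back one whole day, then add 18 hours --
# the same shape as DATEADD(hour, 18, DATEDIFF(day, 1, GETDATE())).
yesterday_18 = (datetime(now.year, now.month, now.day)
                - timedelta(days=1) + timedelta(hours=18))
print(yesterday_18)  # 2016-01-04 18:00:00
```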

    Since you said in comments that SetDateTime reflects the timestamp of an event that happened, and can never be a time in the future, you don't really need a BETWEEN condition here. You could replace:

            SetDatetime BETWEEN 
              DATEADD(hour, 18, DATEDIFF(day, 1, GETDATE())) 
              AND CURRENT_TIMESTAMP
    

    By:

            SetDatetime >= DATEADD(hour, 18, DATEDIFF(day, 1, GETDATE()))
    
    qid & accept id: (34696374, 34698586) query: openrowset - How to select from a filename with white spaces? soup:

    soup wrap:

    From the documentation on OPENROWSET specifically on the query (emphasis mine):

    'query'

    Is a string constant sent to and executed by the provider. The local instance of SQL Server does not process this query, but processes query results returned by the provider, a pass-through query. [...]

    In other words, it's not due to SQL Server that this pass-through query is not working.

    The following two examples use the DefaultDir property in your provider string and should get your statement to work:

    SELECT * FROM OPENROWSET('MSDASQL','Driver={Microsoft Access Text Driver (*.txt, *.csv)}; Extended Properties="text; HDR=YES; FMT=Delimited"; DefaultDir=E:\folder\sub folder;','SELECT * FROM [my file#txt]');
    

    Or

    SELECT * FROM OPENROWSET('MSDASQL','Driver={Microsoft Access Text Driver (*.txt, *.csv)}; Extended Properties="text; HDR=YES; FMT=Delimited"; DefaultDir=E:\folder\sub folder;','SELECT * FROM "my file.txt"');
    
    qid & accept id: (34712380, 34712460) query: In SQL(Ms-Access) how to Order By customized order? (Solved) soup:

    soup wrap:

    You can use a bunch of nested iif() statements. An alternative is to use instr():

    select *
    from TableA
    where [A_Design] In ("A", "D", "C" , "B")
    order by instr("ADCB", A_Design);
    

    Note: this works fine for single character codes. For longer codes, you should use delimiters:

    select *
    from TableA
    where [A_Design] In ("A", "D", "C" , "B")
    order by instr(",A,D,C,B,", "," & A_Design & ",");
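As a quick sanity check, the same instr() ordering works verbatim in SQLite (run via Python with toy rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TableA (A_Design TEXT)")
conn.executemany("INSERT INTO TableA VALUES (?)",
                 [("B",), ("A",), ("C",), ("D",)])
# instr() returns each code's 1-based position in the pattern string,
# and that position becomes the sort key.
rows = conn.execute("""
SELECT A_Design FROM TableA
WHERE A_Design IN ('A', 'D', 'C', 'B')
ORDER BY instr('ADCB', A_Design)
""").fetchall()
print(rows)  # [('A',), ('D',), ('C',), ('B',)]
```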
    
    qid & accept id: (34722021, 34723760) query: Replace multiple IDs within text expression soup:

    soup wrap:

    One way is recursive common table expression:

    CREATE TABLE test(id INT, "name" VARCHAR(100), expression VARCHAR(100));
    
    INSERT INTO test(id,  "name", expression)
    SELECT 1,  'width', NULL                      
    UNION ALL SELECT 2, 'length', NULL                      
    UNION ALL SELECT 3, 'area'  ,  '[1] * [2]' 
    UNION ALL SELECT 4, 'height', NULL
    UNION ALL SELECT 5, 'volume', '[3] * [4]'       
    UNION ALL SELECT 6, 'volumne_alt', '[2]^3';
    

    Query:

    WITH RECURSIVE cte AS (
      SELECT id,  expression::varchar(10000), "name"
             ,(regexp_matches(expression, '\[(\d+)\]'))[1] AS repid
      FROM  test
      WHERE expression IS NOT NULL  
      UNION ALL
      SELECT id, REPLACE(expression, repid, (SELECT name 
                                             FROM test 
                                             WHERE id = repid::int))::varchar(10000)
              ,"name",(regexp_matches(expression, '\[(\d+)\]'))[1]    
      FROM cte c
      WHERE c.expression ~ '(.*)\[(\d+)\](.*)'
    )
    SELECT id, "name", expression
    FROM cte
    WHERE expression !~ '(.*)\[(\d+)\](.*)'
    ORDER BY id;
    

    SqlFiddleDemo

    Output:

    ╔═════╦══════════════╦════════════════════╗
    ║ id  ║    name      ║     expression     ║
    ╠═════╬══════════════╬════════════════════╣
    ║  3  ║ area         ║ [width] * [length] ║
    ║  5  ║ volume       ║ [area] * [height]  ║
    ║  6  ║ volumne_alt  ║ [length]^3         ║
    ╚═════╩══════════════╩════════════════════╝
    

    With table UPDATE:

    WITH cte AS
    (...
    )
    UPDATE test AS t
    SET expression = c.expression
    FROM cte AS c
    WHERE t.id = c.id AND c.expression !~ '(.*)\[(\d+)\](.*)';
    

    SqlFiddleDemo2

    Output:

    ╔═════╦══════════════╦════════════════════╗
    ║ id  ║    name      ║     expression     ║
    ╠═════╬══════════════╬════════════════════╣
    ║  1  ║ width        ║ (null)             ║
    ║  2  ║ length       ║ (null)             ║
    ║  3  ║ area         ║ [width] * [length] ║
    ║  4  ║ height       ║ (null)             ║
    ║  5  ║ volume       ║ [area] * [height]  ║
    ║  6  ║ volumne_alt  ║ [length]^3         ║
    ╚═════╩══════════════╩════════════════════╝
    
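
As a sanity check of what the recursive CTE computes, here is a hedged sketch of the same substitution loop in plain Python (the rows mirror the test table above; the regex plays the role of regexp_matches, and each loop pass corresponds to one recursion step):

```python
import re

rows = {
    1: ("width", None), 2: ("length", None), 3: ("area", "[1] * [2]"),
    4: ("height", None), 5: ("volume", "[3] * [4]"), 6: ("volumne_alt", "[2]^3"),
}

def resolve(expr):
    # keep substituting numeric references until none are left
    while re.search(r"\[\d+\]", expr):
        expr = re.sub(r"\[(\d+)\]",
                      lambda m: "[%s]" % rows[int(m.group(1))][0], expr)
    return expr

resolved = {i: resolve(e) for i, (n, e) in rows.items() if e is not None}
print(resolved)  # {3: '[width] * [length]', 5: '[area] * [height]', 6: '[length]^3'}
```

Note that, like the CTE, this substitutes names rather than fully expanding nested expressions: volume becomes "[area] * [height]", not a formula in width and length.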
    qid & accept id: (34766139, 34767415) query: Compare two nvarchar columns with Unicode text in SQL server 2012 soup:
    soup wrap:

    Is there any other option or can this only be done with collation?

    Yes there is, for instance HASHBYTES:

    DECLARE @TABLE TABLE(A nvarchar(100),B nvarchar(100));
    INSERT INTO @TABLE VALUES (N'A²', N'A2')
    
    SELECT *
    FROM @TABLE 
    WHERE HASHBYTES('SHA2_256',A) <> HASHBYTES('SHA2_256',B);
    

    LiveDemo

    Output:

    ╔════╦════╗
    ║ A  ║ B  ║
    ╠════╬════╣
    ║ A² ║ A2 ║
    ╚════╩════╝
    

    Anyway the collation solution is the cleanest one.
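
The reason HASHBYTES works is that hashing compares raw bytes, bypassing collation entirely. A small sketch of the same idea with Python's hashlib (assuming the nvarchar values are UTF-16LE encoded, as SQL Server stores them):

```python
import hashlib

a, b = "A\u00b2", "A2"  # 'A²' vs 'A2'
# hash the raw bytes, so no collation can equate the two strings
ha = hashlib.sha256(a.encode("utf-16-le")).digest()
hb = hashlib.sha256(b.encode("utf-16-le")).digest()
print(ha != hb)  # True: byte-wise the strings differ
```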

    qid & accept id: (34773830, 34820222) query: UPDATE and DELETE a set of rows when the operations affect the set soup:

    soup wrap:

    Method 1 - Stored Procedure with temporary table

    This seems the simplest method if you're prepared to use a Stored Procedure and temporary table:

    CREATE PROCEDURE sp_sanitize_mrbs()
    BEGIN
        DROP TEMPORARY TABLE IF EXISTS mrbs_to_sanitize;
        CREATE TEMPORARY TABLE mrbs_to_sanitize (
          id int auto_increment primary key,
          room2_id int,
          room3_id int);
    
        -- "I want to go through the table, and when room 2 & 3 both have
        -- entries at the same time and with the same name I want to..."
        INSERT INTO mrbs_to_sanitize (room2_id, room3_id)
        SELECT m1.id, m2.id
        FROM mrbs_entry m1
        CROSS JOIN mrbs_entry m2
        WHERE m1.start_time = m2.start_time
          AND m1.name = m2.name
          AND m1.room_id = 2
          AND m2.room_id = 3;
    
        -- ...change room 2's room_id to 1
        UPDATE mrbs_entry me
        JOIN mrbs_to_sanitize mts
        ON me.id = mts.room2_id
        SET me.room_id = 1;
    
        -- "...and delete the entry for room 3."
        DELETE me
        FROM mrbs_entry me
        JOIN mrbs_to_sanitize mts
        ON me.id = mts.room3_id;
    END//
    
    -- ...
    -- The Stored Procedure can now be called any time you like:
    CALL sp_sanitize_mrbs();
    

    See SQL Fiddle Demo - using a Stored Procedure

    Method 2 - without Stored Procedure

    The following "trick" is slightly more complex but should do it without using stored procedures, temporary tables or variables:

    -- "I want to go through the table, and when room 2 & 3 both have
    -- entries at the same time and with the same name I want to..."
    
    -- "...change room 2's room_id to 1"
    UPDATE mrbs_entry m1
    CROSS JOIN mrbs_entry m2
    -- temporarily mark this row as having been updated
    SET m1.room_id = 1, m1.name = CONCAT(m1.name, ' UPDATED')
    WHERE m1.start_time = m2.start_time
      AND m1.name = m2.name
      AND m1.room_id = 2
      AND m2.room_id = 3;
    
    -- "...and delete the entry for room 3."
    DELETE m2 FROM mrbs_entry m1
    CROSS JOIN mrbs_entry m2
    WHERE m1.start_time = m2.start_time
      AND m1.name = CONCAT(m2.name, ' UPDATED')
      AND m1.room_id = 1
      AND m2.room_id = 3;
    
    -- now remove the temporary marker to restore previous value
    UPDATE mrbs_entry
    SET name = LEFT(name, CHAR_LENGTH(name) - CHAR_LENGTH(' UPDATED'))
    WHERE name LIKE '% UPDATED';
    

    Explanation of Method 2

    The first query updates the room number. However, as you mentioned, we need to perform the delete in a separate query. Since I'm not making any assumptions about your data, a safe way of requerying to get the same results once they have been modified is to introduce a "marker" that temporarily indicates which rows were changed by the update. In the example above, this marker is ' UPDATED', but you may wish to choose something less likely to ever be used for any other purpose, e.g. a random sequence of characters. It could also be moved onto a different field if required. The delete can then be performed, and finally the marker is removed to restore the original data.

    See SQL Fiddle demo - without Stored Procedure.
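
A rough sketch of Method 1's snapshot-then-modify approach, run in SQLite via Python (the table shape follows the answer; the bookings themselves are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mrbs_entry (id INTEGER PRIMARY KEY, room_id INT,
                         start_time TEXT, name TEXT);
INSERT INTO mrbs_entry (room_id, start_time, name) VALUES
  (2, '2016-01-01 09:00', 'standup'),
  (3, '2016-01-01 09:00', 'standup'),
  (2, '2016-01-01 10:00', 'review');  -- lone room-2 booking: should be untouched

-- snapshot the matching id pairs first, so later changes can't affect the match
CREATE TEMP TABLE to_sanitize AS
  SELECT m1.id AS room2_id, m2.id AS room3_id
  FROM mrbs_entry m1 JOIN mrbs_entry m2
    ON m1.start_time = m2.start_time AND m1.name = m2.name
  WHERE m1.room_id = 2 AND m2.room_id = 3;

UPDATE mrbs_entry SET room_id = 1
  WHERE id IN (SELECT room2_id FROM to_sanitize);
DELETE FROM mrbs_entry
  WHERE id IN (SELECT room3_id FROM to_sanitize);
""")
rooms = sorted(r[0] for r in conn.execute("SELECT room_id FROM mrbs_entry"))
print(rooms)  # [1, 2]: the pair was merged into room 1, the lone booking kept
```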

    qid & accept id: (34786699, 34787180) query: Select and Update Oracle BLOB column with XMLQUERY soup:

    soup wrap:

    For selecting from XML you can either use ExtractValue(XmlType, XPath) or XmlTable to transform the XML CLOB into a queryable table of XML values. For BLOB conversion, you should be able to just wrap it with XmlType(blob_value, 1); then you can perform any of the XML-related functions on it.

    SELECT ExtractValue(
              XmlType('<test><node1>value1</node1><node2>value2</node2></test>'), 
                      '/test/node1') as Node1 
    FROM dual;
    

    Or using XmlTable

    SELECT xt.Node1, xt.Node2
    FROM XmlTable('/test/block'
             PASSING XmlType('<test>
    <block><node1>value1a</node1><node2>value2a</node2></block>
    <block><node1>value1b</node1><node2>value2b</node2></block>
    <block><node1>value1c</node1><node2>value2c</node2></block>
    </test>')
            COLUMNS
            "Node1"     VARCHAR2(20)   PATH 'node1',
            "Node2"     VARCHAR2(20)   PATH 'node2') AS xt;
    

    Using UpdateXml, assuming the record I am updating has the above XML in a column:

    UPDATE MyTable SET xml_data =
    UpdateXml(xml_data, '/test/block/node2[text() = "value2b"]/text()', 'value2z')
    WHERE data_id = 1;
    

    The above should update the node2 that had the value value2b so that it now has value2z instead. UpdateXml then returns the new XML, which is assigned to the xml_data column in the record that matches data_id = 1.

    One note: the above query works with a column that is already of type XmlType, whereas you are working with a BLOB. I would ask: is there a reason for it being BLOB instead of CLOB or XmlType? If you are storing VARCHAR-type data you should really be using one of the latter two types: CLOB for general character data, and XmlType (which is a more specific kind of CLOB anyway) if you are storing strictly XML data.

    If you are stuck using the BLOB data type you will need to perform a lot of conversions. Using XmlType(blob_data, 1) should get you from BLOB to XmlType, but going back you will likely need to use UTL_RAW.CAST_TO_RAW(xml_data). So the query would become:

    UPDATE MyTable SET clob_data =
    UTL_RAW.CAST_TO_RAW(
        UpdateXml(XmlType(clob_data, 1), '/test/block/node2[text() = "value2b"]/text()', 'value2z').GetClobVal()
    )
    WHERE data_id = 1;
    

    Here is a working standalone example showing the various methods mentioned above:

    DECLARE varchar_data    VARCHAR2(500);
            blob_data       BLOB;
            xml_data        XMLType;
            node1Val        VARCHAR(20);
            node2Val        VARCHAR(20);
    
    BEGIN
        select '<test>
    <group><node1>value1a</node1><node2>value2a</node2></group>
    <group><node1>value1b</node1><node2>value2b</node2></group>
    <group><node1>value1c</node1><node2>value2c</node2></group>
    <group><node1>value1d</node1><node2>value2d</node2></group>
    </test>' into varchar_data from dual;
    
        select UTL_RAW.CAST_TO_RAW(varchar_data) into blob_data from dual;
    
        select XmlType(blob_data, 1) into xml_data from dual;
        dbms_output.put_line(xml_data.getClobVal());
    
        select xt.Node1, xt.Node2
        into node1Val, node2Val
        from XmlTable('/test/group' 
            passing XmlType(blob_data, 1)
            columns Node1     VARCHAR2(20)    path 'node1',
                    Node2     VARCHAR2(20)    path 'node2'
            ) xt
        where xt.Node1 = 'value1c';
        dbms_output.put_line('node1Val = ''' || node1Val || ''', node2Val = ''' || node2Val || ''';'); 
    
        -- Using UpdateXml to update the XML, that will return an XmlType 
        -- so we call GetClobVal() to let CAST_TO_RAW convert to BLOB.
        select UTL_RAW.CAST_TO_RAW(
            UpdateXml(
                XmlType(blob_data, 1), 
                '/test/group/node2[../node1/text() = "value1c"]/text()', 
                'zzzz').GetClobVal()
            ) into blob_data
        from dual; 
    
        select XmlType(blob_data, 1) into xml_data from dual;
        dbms_output.put_line(xml_data.getClobVal());
    
        select xt.Node1, xt.Node2
        into node1Val, node2Val
        from XmlTable('/test/group' 
            passing XmlType(blob_data, 1)
            columns Node1     VARCHAR2(20)    path 'node1',
                    Node2     VARCHAR2(20)    path 'node2'
            ) xt
        where xt.Node1 = 'value1c';
        dbms_output.put_line('node1Val = ''' || node1Val || ''', node2Val = ''' || node2Val || ''';'); 
    
    END;
    
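
The XPath in the UpdateXml call can be hard to read; here is a hedged sketch of what it does, using Python's xml.etree on the same invented <test>/<group> document shape as the PL/SQL demo:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<test>"
    "<group><node1>value1a</node1><node2>value2a</node2></group>"
    "<group><node1>value1c</node1><node2>value2c</node2></group>"
    "</test>"
)
for group in doc.findall("group"):
    # mirror '/test/group/node2[../node1/text() = "value1c"]/text()':
    # pick the node2 whose sibling node1 equals 'value1c'
    if group.findtext("node1") == "value1c":
        group.find("node2").text = "zzzz"
print(doc.find("group[node1='value1c']/node2").text)  # zzzz
```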
    qid & accept id: (34845881, 34845920) query: How do I use a wild card in the middle of an sql server like query soup:

    soup wrap:

    The problem is due to the presence of [] in the string. [] is used with the LIKE operator to find

    any single character within the specified range ([a-f]) or set ([abcdef]).

    so you need to ESCAPE the square brackets:

    select 1 
    where '[Error] Something failed in (Freds) session'
    like '%\[Error] Something failed in (%) session%' escape '\'
    

    or

    select 1 
    where '[Error] Something failed in (Freds) session'
    like '%[[]Error] Something failed in (%) session%'
    
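
For what it's worth, not every engine gives [] special meaning: SQLite's LIKE does not, but its ESCAPE clause behaves the same way for the wildcards LIKE does use. A quick check via Python's sqlite3, escaping a literal % instead:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# the escaped \% must match literally; the bare trailing % stays a wildcard
row = conn.execute(r"SELECT '100% done' LIKE '100\% %' ESCAPE '\'").fetchone()
print(row[0])  # 1: '100%' matched literally, 'done' matched by the wildcard
```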
    qid & accept id: (34875353, 34876344) query: Update statement with lookup table soup:

    soup wrap:

    This can be a bit tricky using a single statement, because SQL Server likes to optimize things. So the obvious:

    update t 
        set t.actiecode = (select top 1 actiecode 
                           from data_mgl_campagnemails_codes
                           order by newid()
                          )
        from data_mgl_campagnemails_transfer t;
    

    That also doesn't work. One method is to enumerate things and use a join or correlated subquery:

    with t as (
          select t.*, row_number() over (order by newid()) as seqnum
          from data_mgl_campagnemails_transfer t
         ),
         a as (
          select a.*, row_number() over (order by newid()) as seqnum
          from data_mgl_campagnemails_codes a
         )
    update t
        set t.actiecode = a.actiecode
        from t join
             a
             on t.seqnum = a.seqnum;
    

    Another way is to "trick" SQL Server into running the correlated subquery more than once. I think something like this:

    update t 
        set t.actiecode = (select top 1 actiecode 
                           from data_mgl_campagnemails_codes
                           where t.CustomerId is not null -- references the outer table but really does nothing
                           order by newid()
                          )
        from data_mgl_campagnemails_transfer t;
    
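
The enumerate-and-join method boils down to pairing two randomly numbered lists. A minimal sketch of that pairing in plain Python (data invented, seeded only to make the example reproducible):

```python
import random

random.seed(0)
transfer_rows = ["row1", "row2", "row3"]
codes = ["CODE-A", "CODE-B", "CODE-C"]

# random.sample(x, len(x)) ~ row_number() over (order by newid()):
# each list gets a random order, then rows are paired by position
paired = dict(zip(random.sample(transfer_rows, 3), random.sample(codes, 3)))
print(sorted(paired))  # every transfer row got exactly one code
```

Which code lands on which row is random, but every row gets exactly one code and every code is used once, which is what the seqnum join guarantees.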
    qid & accept id: (34876711, 34877497) query: MySQL query - compare version numbers soup:

    soup wrap:

    Thanks for the tips @symcbean and @gordon-linoff, my final query looks like this:

    SELECT *
    FROM versions WHERE CONCAT(
            LPAD(SUBSTRING_INDEX(SUBSTRING_INDEX(version_number, '.', 1), '.', -1), 10, '0'),
            LPAD(SUBSTRING_INDEX(SUBSTRING_INDEX(version_number, '.', 2), '.', -1), 10, '0'),
            LPAD(SUBSTRING_INDEX(SUBSTRING_INDEX(version_number, '.', 3), '.', -1), 10, '0') 
           ) > CONCAT(LPAD(2,10,'0'), LPAD(1,10,'0'), LPAD(27,10,'0'));
    

    This allows each component to be up to 10 digits long.

    It transforms this:

    3.11.9 > 2.1.27
    

    to this:

    '000000000300000000110000000009' > '000000000200000000010000000027'
    
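
The padding step is easy to verify off-database. A minimal Python sketch of the same LPAD-per-component idea, including a case where naive string comparison goes wrong:

```python
def padded(version, width=10, parts=3):
    # mirror of LPAD(SUBSTRING_INDEX(...), 10, '0') applied to each component
    return "".join(p.zfill(width) for p in version.split(".")[:parts])

print(padded("3.11.9"))                    # 000000000300000000110000000009
print(padded("3.11.9") > padded("2.1.27")) # True, as expected
print("3.2.0" > "10.0.0")                  # True: naive string compare is wrong here
print(padded("3.2.0") > padded("10.0.0"))  # False: padding fixes it
```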
    qid & accept id: (34908511, 34908855) query: Update an ordinal column based on the alphabetic ordering of another column soup:

    soup wrap:

    I would do this with a simple update:

    with toupdate as (
          select m.*, row_number() over (partition by parent order by title) as seqnum
          from menu m
         )
    update toupdate
        set m_order = toupdate.seqnum;
    

    This restarts the ordering for each parent. If you have a particular parent in mind, use a WHERE clause:

    where parentid = @parentid and m_order <> toupdate.seqnum
    
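
A hedged SQLite translation of the same update, runnable from Python (SQLite has no updatable CTE, so a correlated scalar subquery stands in for the join; the menu rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE menu (id INTEGER PRIMARY KEY, parent INT, title TEXT, m_order INT);
INSERT INTO menu (parent, title) VALUES
  (1, 'Banana'), (1, 'Apple'), (2, 'Zebra'), (2, 'Aardvark');
""")
conn.execute("""
UPDATE menu SET m_order = (
  -- number rows alphabetically within each parent, then pick this row's number
  SELECT seqnum FROM (
    SELECT id, row_number() OVER (PARTITION BY parent ORDER BY title) AS seqnum
    FROM menu
  ) ranked
  WHERE ranked.id = menu.id
)
""")
rows = conn.execute(
    "SELECT title, m_order FROM menu ORDER BY parent, m_order").fetchall()
print(rows)  # [('Apple', 1), ('Banana', 2), ('Aardvark', 1), ('Zebra', 2)]
```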
    qid & accept id: (34954765, 34957654) query: Find rows where the value in column 1 exists in colunm2 soup:

    soup wrap:

    Just for the sake of completeness. You can also use INTERSECT:

    Select FirstName, LastName
    From People
    
    INTERSECT
    
    Select LastName, FirstName
    From People
    

    This will return only one pair of matching rows, i.e.:

    +-----------+----------+
    | FirstName | LastName |
    +-----------+----------+
    | Doc       | Jones    |
    | Jones     | Doc      |
    +-----------+----------+
    

    even if original data has Doc Jones or Jones Doc more than once:

    DECLARE @People TABLE ([FirstName] varchar(50), [LastName] varchar(50));
    
    INSERT INTO @People ([FirstName], [LastName]) VALUES
    ('Doc', 'Jones'),
    ('Doc', 'Jones'),
    ('Jones', 'Doc'),
    ('Doc', 'Holiday'),
    ('John', 'Doe');
    
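
The deduplicating behaviour of INTERSECT is easy to confirm; here it is run in SQLite from Python, using the answer's sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE People (FirstName TEXT, LastName TEXT);
INSERT INTO People VALUES
  ('Doc','Jones'), ('Doc','Jones'), ('Jones','Doc'),
  ('Doc','Holiday'), ('John','Doe');
""")
rows = conn.execute("""
  SELECT FirstName, LastName FROM People
  INTERSECT
  SELECT LastName, FirstName FROM People
""").fetchall()
# each matching pair appears once, despite the duplicate 'Doc Jones' row
print(sorted(rows))  # [('Doc', 'Jones'), ('Jones', 'Doc')]
```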
    qid & accept id: (35011175, 35012352) query: MySQL Left Join with fundamentally differently structured tables soup:

    soup wrap:

    You may be looking for the group_concat function:

    SELECT     MASTER.animal, MASTER.species, 
               group_concat(BI.properties separator ', ') as Properties
    FROM       MASTER
    LEFT JOIN  BOARD_INFO BI
           ON (MASTER.animal = BI.animal)
    GROUP BY   MASTER.animal, MASTER.species
    

    See this fiddle.

    Output of the SQL is:

    +--------+---------+-------------------------------+
    | animal | species | properties                    |
    +--------+---------+-------------------------------+
    | dog    | mammal  | has ears, has a tail, has fir |
    | cat    | mammal  | meows, hunts birds            |
    | turtle | reptile | (null)                        | 
    +--------+---------+-------------------------------+
    

    Your PHP can stay like it is.
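
The same query runs almost unchanged in SQLite, which is handy for testing locally; note that SQLite's group_concat takes the separator as a second argument rather than MySQL's SEPARATOR keyword (sample data abbreviated from the fiddle):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE MASTER (animal TEXT, species TEXT);
CREATE TABLE BOARD_INFO (animal TEXT, properties TEXT);
INSERT INTO MASTER VALUES ('dog','mammal'), ('turtle','reptile');
INSERT INTO BOARD_INFO VALUES ('dog','has ears'), ('dog','has a tail');
""")
rows = conn.execute("""
  SELECT m.animal, m.species, group_concat(b.properties, ', ') AS Properties
  FROM MASTER m
  LEFT JOIN BOARD_INFO b ON m.animal = b.animal
  GROUP BY m.animal, m.species
  ORDER BY m.animal
""").fetchall()
print(rows)  # dog's properties are concatenated; turtle, with no match, gets NULL
```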

    qid & accept id: (35036298, 35037434) query: Need Oracle sql query for grouping the date soup:

    soup wrap:

    You can do this with lead and lag analytic functions - in a subquery which you then group over, which may be what you missed - but you can also do it with an analytic 'trick'.

    If you look at the difference between each date and the lowest date you get a broken sequence, in your case 0, 1, 2, 3, 5, ..., 27, 28, 29. You can see that with attndate - min(attndate) over ().

    You also have another unbroken sequence available from row_number() over (order by attndate), which gives you 1, 2, 3, ... 28.

    If you subtract one from the other each contiguous block of dates gets the same answer, which I've called 'slot_no':

    select attndate,
      attndate - min(attndate) over ()
        - row_number() over (order by attndate) as slot_no
    from your_table;
    

    With this data every row gets either -1, 0 or 1. (You can add two to those to make them more friendly if you want, but that only really works if the gaps in the data are single days). You can then group by that slot number:

    with cte as (
      select attndate,
        attndate - min(attndate) over ()
          - row_number() over (order by attndate) as slot_no
      from your_table
    )
    select dense_rank() over (order by slot_no) as slot_no,
      min(attndate) as attnfrom, max(attndate) as attntill
    from cte
    group by slot_no
    order by slot_no;
    

    With some generated data:

    alter session set nls_date_format = 'DD/MM/YYYY';
    with your_table (attndate) as (
      select date '2015-11-02' + level - 1 from dual connect by level <= 4
      union all select date '2015-11-07' + level - 1 from dual connect by level <= 13
      union all select date '2015-11-21' + level - 1 from dual connect by level <= 11
    ),
    cte as (
      select attndate,
        attndate - min(attndate) over ()
          - row_number() over (order by attndate) as slot_no
      from your_table
    )
    select dense_rank() over (order by slot_no) as slot_no,
      min(attndate) as attnfrom, max(attndate) as attntill
    from cte
    group by slot_no
    order by slot_no;
    
       SLOT_NO ATTNFROM   ATTNTILL 
    ---------- ---------- ----------
             1 02/11/2015 05/11/2015
             2 07/11/2015 19/11/2015
             3 21/11/2015 01/12/2015
    

    If your real scenario is getting these ranges for multiple keys, say a person ID, then you can add a partition by clause to each of the analytic function calls, in the three over () sections.
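
    The date-minus-row_number trick is dialect-independent; as a sanity check, here is a small Python sketch of the same idea using day offsets instead of DATE values (the data is made up):

```python
from itertools import groupby

# Day offsets from the first date, with gaps after 3 and after 7
# (mirroring contiguous blocks of attendance dates).
days = [0, 1, 2, 3, 5, 6, 7, 9, 10]

# offset - row_number is constant within each contiguous run,
# exactly like "attndate - min(attndate) over () - row_number()".
runs = []
for _, grp in groupby(enumerate(days), key=lambda p: p[1] - p[0]):
    block = [d for _, d in grp]
    runs.append((block[0], block[-1]))  # (attnfrom, attntill)

print(runs)  # [(0, 3), (5, 7), (9, 10)]
```

    Each tuple is one "slot": the first and last day of a contiguous block.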

    qid & accept id: (35041250, 35041604) query: Auto insert rows with repeated data, following two patterns soup:


    You can use the following SQL script to insert the required values into your table:

    INSERT INTO target (id, letter, `number`)
    SELECT rn, col, (rn - 1) % 4 + 1 AS seq
    FROM (
    SELECT col, @rn := @rn + 1 AS rn 
    FROM (
       SELECT 'a' AS col UNION ALL SELECT 'b' UNION ALL
       SELECT 'c' UNION ALL SELECT 'd') AS t
    CROSS JOIN (
       SELECT 1 AS x UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL 
       SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
       SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 ) AS t1
    CROSS JOIN (
       SELECT 1 AS x UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL 
       SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
       SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 ) AS t2
    CROSS JOIN (SELECT @rn := 0) AS var  ) AS s
    WHERE rn <= 456
    

    The above query creates a numbers table of 121 rows using an 11 × 11 Cartesian product. These rows are cross joined with the in-line table ('a'), ('b'), ('c'), ('d') to produce a total of 484 rows. The outer query selects just the rows needed, i.e. 456 rows in total.

    Note: If you want to insert values:

    id, letter, number
    1   'a'     1
    2   'b'     1
    3   'c'     1
    4   'd'     1
    5   'a'     2
    6   'b'     2
    7   'c'     2
    8   'd'     2
    ... etc
    

    instead of values:

    id, letter, number
    1   'a'     1
    2   'b'     2
    3   'c'     3
    4   'd'     4
    5   'a'     1
    6   'b'     2
    7   'c'     3
    8   'd'     4
    ... etc
    

    then simply replace (rn - 1) % 4 + 1 AS seq with (rn - 1) DIV 4 + 1 AS seq.

    Demo here
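
    The cross-join numbers-table pattern carries over to other engines as well; here is a hedged SQLite sketch that replaces MySQL's @rn := @rn + 1 counter with a recursive CTE and ties each letter to the row number explicitly instead of relying on evaluation order (table and column names follow the answer):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE target (id INTEGER, letter TEXT, number INTEGER)")

# The recursive CTE stands in for the @rn := @rn + 1 row counter.
con.execute("""
    WITH RECURSIVE nums(rn) AS (
        SELECT 1 UNION ALL SELECT rn + 1 FROM nums WHERE rn < 456
    ),
    letters(idx, col) AS (VALUES (0,'a'), (1,'b'), (2,'c'), (3,'d'))
    INSERT INTO target (id, letter, number)
    SELECT rn, col, (rn - 1) % 4 + 1
    FROM nums JOIN letters ON (rn - 1) % 4 = idx
""")
rows = con.execute(
    "SELECT id, letter, number FROM target ORDER BY id LIMIT 5").fetchall()
print(rows)  # [(1, 'a', 1), (2, 'b', 2), (3, 'c', 3), (4, 'd', 4), (5, 'a', 1)]
```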

    qid & accept id: (35093560, 35093714) query: Find all co authors - Faceting/Grouping for many to many mapping table soup:


    Try this:

    SELECT "AuthorId", COUNT(*)
    FROM BookAuthorMapping
    WHERE "BookId" IN (SELECT "BookId" FROM BookAuthorMapping WHERE "AuthorId" = 1)
    GROUP BY "AuthorId"
    

    Demo here

    You can alternatively use an INNER JOIN:

    SELECT t1."AuthorId", COUNT(*)
    FROM BookAuthorMapping AS t1
    INNER JOIN BookAuthorMapping AS t2 ON t1."BookId" = t2."BookId" AND t2."AuthorId" = 1
    GROUP BY t1."AuthorId"
    

    Demo here
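
    The subquery form works on any engine that supports IN with a subquery; here is a quick SQLite check with made-up mapping data (five book/author pairs are assumed):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE BookAuthorMapping ("BookId" INTEGER, "AuthorId" INTEGER)')
con.executemany("INSERT INTO BookAuthorMapping VALUES (?, ?)",
                [(1, 1), (1, 2), (2, 1), (2, 3), (3, 2)])

# For each author, count how many of author 1's books they appear on.
rows = con.execute("""
    SELECT "AuthorId", COUNT(*)
    FROM BookAuthorMapping
    WHERE "BookId" IN (SELECT "BookId" FROM BookAuthorMapping WHERE "AuthorId" = 1)
    GROUP BY "AuthorId"
    ORDER BY "AuthorId"
""").fetchall()
print(rows)  # [(1, 2), (2, 1), (3, 1)]
```

    Author 1 co-appears with authors 2 and 3 on one book each, and (trivially) with themselves on both of their books.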

    qid & accept id: (35104380, 35104524) query: Need to get the messages between the user identified by LoginId and others soup:


    Try this query:

        (SELECT * from messages where id_sender='$lgn_id' order by date_msg desc limit 1)
        UNION ALL
        (SELECT * from messages where id_receiver='$lgn_id' order by date_msg desc limit 1)
    

    In case you want them joined, this is how it goes:

        SELECT Msg1.id_sender, Msg1.id_receiver, Msg1.date_msg, Msg2.id_sender, Msg2.id_receiver, Msg2.date_msg 
        FROM messages Msg1 join messages Msg2 
        WHERE Msg1.id_receiver=Msg2.id_sender and (Msg1.id_receiver='$lgn_id' or Msg2.id_sender='$lgn_id') order by Msg1.date_msg desc limit 1
    
    qid & accept id: (35150739, 35151231) query: Creating tables on-the-fly soup:


    You can use arguments in the query alias:

    with selected_ids(id) as (
        values (1), (3), (5)
    )
    select *
    from someTable
    where id = any (select id from selected_ids)
    

    You can also use join instead of a subquery, example:

    create table some_table (id int, str text);
    insert into some_table values
    (1, 'alfa'),
    (2, 'beta'),
    (3, 'gamma');
    
    with selected_ids(id) as (
        values (1), (2)
    )
    select *
    from some_table
    join selected_ids
    using(id);
    
     id | str  
    ----+------
      1 | alfa
      2 | beta
    (2 rows)
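
    The join form of this pattern also runs on SQLite (which has a VALUES CTE but no "= ANY (subquery)"); a minimal sketch with the same sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE some_table (id INTEGER, str TEXT)")
con.executemany("INSERT INTO some_table VALUES (?, ?)",
                [(1, "alfa"), (2, "beta"), (3, "gamma")])

# SQLite lacks "= ANY (...)", so use the join form (or IN) instead.
rows = con.execute("""
    WITH selected_ids(id) AS (VALUES (1), (2))
    SELECT id, str
    FROM some_table JOIN selected_ids USING (id)
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 'alfa'), (2, 'beta')]
```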
    
    qid & accept id: (35183616, 35183945) query: SQL Query with date that does not exist soup:


    Why don't you want to use LAST_DAY() function:

    SELECT SYSDATE, trunc(LAST_DAY(SYSDATE)) last, 
       LAST_DAY(SYSDATE) - SYSDATE days_left FROM DUAL;
    

    Output:

    SYSDATE           LAST               DAYS_LEFT
    ----------------- ----------------- ----------
    03.02.16 18:38:26 29.02.16 00:00:00         26
    
    1 row selected.
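
    Outside Oracle the same "days left in the month" computation is easy to reproduce; for instance with Python's calendar.monthrange (February 2016 is a leap month, hence the 29 and the 26 days remaining in the output above):

```python
import calendar
from datetime import date

def days_left_in_month(d: date) -> int:
    # monthrange returns (weekday of day 1, number of days in month);
    # the second element plays the role of LAST_DAY().
    last_day = calendar.monthrange(d.year, d.month)[1]
    return last_day - d.day

# Matches the Oracle output: LAST_DAY of 03.02.16 is 29.02.16, 26 days left.
print(days_left_in_month(date(2016, 2, 3)))  # 26
```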
    
    qid & accept id: (35222092, 35223633) query: Sql query to search for multiple match in junction table soup:


    Either JOIN estate_comforts twice, one time for comfort_id 1, and another time for comfort_id 2:

    SELECT DISTINCT "estates".*
    FROM   "estates"
       INNER JOIN "estate_comforts" ec1
              ON "estates"."id" = ec1."estate_id"
       INNER JOIN "estate_comforts" ec2
              ON "estates"."id" = ec2."estate_id"
    WHERE ec1."comfort_id" = '1'
      AND ec2."comfort_id" = '2'
    

    Alternatively, do a GROUP BY on estate_comforts to find estate_id with at least two different comfort_id values. Join with that result:

    select e.*
    from "estates" e
      join (select "estate_id"
            from "estate_comforts"
            WHERE  "comfort_id" IN ( '1', '2' ) 
            group by "estate_id"
            having count(distinct "comfort_id") >= 2) ec ON e."id" = ec."estate_id"
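
    The GROUP BY/HAVING form is plain relational division and runs on any engine with COUNT(DISTINCT ...); a hedged SQLite sketch with made-up estates, where only estate 1 has both comforts:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE estates (id INTEGER PRIMARY KEY)")
con.execute("CREATE TABLE estate_comforts (estate_id INTEGER, comfort_id INTEGER)")
con.executemany("INSERT INTO estates VALUES (?)", [(1,), (2,), (3,)])
con.executemany("INSERT INTO estate_comforts VALUES (?, ?)",
                [(1, 1), (1, 2), (2, 1), (3, 2), (3, 3)])

# Only estates having BOTH comfort 1 and comfort 2 qualify.
rows = con.execute("""
    SELECT e.id
    FROM estates e
    JOIN (SELECT estate_id
          FROM estate_comforts
          WHERE comfort_id IN (1, 2)
          GROUP BY estate_id
          HAVING COUNT(DISTINCT comfort_id) >= 2) ec ON e.id = ec.estate_id
""").fetchall()
print(rows)  # [(1,)]
```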
    
    qid & accept id: (35232556, 35233023) query: In SQL how to select previous rows based on the current row values? soup:


    Every table in Postgres should have a primary key. Ideally you have one that defines the expected order of rows.

    Example data:

    create table msg (
        id int primary key,
        from_person text,
        to_person text,
        ts timestamp without time zone
    );
    
    insert into msg values
    (1, 'nancy',   'charlie', '2016-02-01 01:00:00'),
    (2, 'bob',     'charlie', '2016-02-01 01:00:00'),
    (3, 'charlie', 'nancy',   '2016-02-01 01:00:01'),
    (4, 'mary',    'charlie', '2016-02-01 01:02:00');
    

    The query:

    select m1.id, count(m2)
    from msg m1
    left join msg m2
    on m2.id < m1.id
    and m2.to_person = m1.to_person
    and m2.ts >= m1.ts - '3m'::interval
    group by 1
    order by 1;
    
     id | count 
    ----+-------
      1 |     0
      2 |     1
      3 |     0
      4 |     2
    (4 rows)
    

    In the lack of a primary key you can use the function row_number(), for example:

    with msg_with_rn as (
        select *, row_number() over (order by ts, from_person desc) rn
        from msg
        )
    select m1.id, count(m2)
    from msg_with_rn m1
    left join msg_with_rn m2
    on m2.rn < m1.rn
    and m2.to_person = m1.to_person
    and m2.ts >= m1.ts - '3m'::interval
    group by 1
    order by 1;
    

    Note that I have used row_number() over (order by ts, from_person desc) to get the sequence of rows as you have presented in the question. Of course, you should decide yourself how to resolve ambiguities arising from the same values of the column ts (as in the first two rows).
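
    The self-join-on-an-ordering-column technique is portable; here is a hedged SQLite sketch of the same query, with epoch seconds standing in for Postgres timestamps and 180 seconds for the '3m' interval (data mirrors the example above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE msg (id INTEGER PRIMARY KEY, to_person TEXT, ts INTEGER)")
# ts as epoch seconds instead of Postgres timestamps/intervals.
con.executemany("INSERT INTO msg VALUES (?, ?, ?)",
                [(1, "charlie", 0), (2, "charlie", 0),
                 (3, "nancy", 1), (4, "charlie", 120)])

# For each message, count earlier messages to the same person
# within the preceding 180 seconds (the answer's 3 minutes).
rows = con.execute("""
    SELECT m1.id, COUNT(m2.id)
    FROM msg m1
    LEFT JOIN msg m2
      ON m2.id < m1.id
     AND m2.to_person = m1.to_person
     AND m2.ts >= m1.ts - 180
    GROUP BY m1.id
    ORDER BY m1.id
""").fetchall()
print(rows)  # [(1, 0), (2, 1), (3, 0), (4, 2)]
```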

    qid & accept id: (35238887, 35262711) query: OrientDB - Group by date query soup:


    I created this structure and I think that's similar to yours:

    create class Post
    
    create property Post.datePosted date
    
    insert into Post (datePosted) values ('2016-01-25')
    insert into Post (datePosted) values ('2016-01-28')
    insert into Post (datePosted) values ('2016-01-25')
    insert into Post (datePosted) values ('2016-02-04')
    

    These are my options to retrieve the results you want:

    First query:

    select day, count(*) as posts from (select datePosted.format('yyyy-MM-dd') as day from Post) 
    group by day
    

    Output:

    ----+------+----------+-----
    #   |@CLASS|day       |posts
    ----+------+----------+-----
    0   |null  |2016-01-25|2
    1   |null  |2016-01-28|1
    2   |null  |2016-02-04|1
    ----+------+----------+-----
    

    Second query:

    select datePosted.format('yyyy-MM-dd'), count(*) as posts from Post group by datePosted
    

    Output:

    ----+------+----------+-----
    #   |@CLASS|datePosted|posts
    ----+------+----------+-----
    0   |null  |2016-01-25|2
    1   |null  |2016-01-28|1
    2   |null  |2016-02-04|1
    ----+------+----------+-----
    

    Hope it helps

    EDITED

    Here's an example in Java:

    Java Code:

    private static String remote = "remote:localhost/";
        public static void main(String[] args) {
            String dbName = "DBname";
            String path = remote + dbName;
            OServerAdmin serverAdmin;
            try {
                serverAdmin = new OServerAdmin(path).connect("root", "root");
                if (serverAdmin.existsDatabase()) { // if DB already exists
                    System.out.println("Database '" + dbName + "' already exists");
                    ODatabaseDocumentTx db = new ODatabaseDocumentTx(path);
                    db.open("root", "root");
                    Iterable results = db
                        .command(new OSQLSynchQuery(
                                "select day, count(*) as posts from (select datePosted.format('yyyy-MM-dd') as day from Post) group by day"))
                        .execute();
                    for (ODocument result : results) {
                        System.out.println("Day: " + result.field("day") + "   Posts: " + result.field("posts"));
                    }
                    db.close();
                }
                else {
                    serverAdmin.createDatabase(dbName, "document", "plocal");
                    System.out.println("Database " + dbName + " created");
                }
                serverAdmin.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    

    Output:

    Day: 2016-01-25   Posts: 2
    Day: 2016-01-28   Posts: 1
    Day: 2016-02-04   Posts: 1
    
    qid & accept id: (35260298, 35322026) query: How can I annotate a queryset with information from another model, or paginate a queryset built with raw in the Django Rest Framework? soup:


    You can define an aggregate for the null check (assuming postgresql):

    from django.db.models import Aggregate
    class AnyNotNull(Aggregate):
        function = 'ANY'
        template = 'true = %(function)s(array_agg(%(expressions)s is not null))'
    

    And use it in a query:

    Tense.objects.filter(
        Q(verbuser_tenses__isnull = True) |
        Q(verbuser_tenses__verbuser_id = user_id)
        ).annotate(selected = AnyNotNull('verbuser_tenses__verbuser_id')
        ).order_by('id')
    

    This will annotate the tense objects with true in selected if the user has access.

    qid & accept id: (35263742, 35264817) query: join table.A with table.B, table.B having multiple rows with respect to one id from table.A soup:


    For each product, you can use NOT EXISTS to make sure no image with lower id exists:

    select p.id, p.product, pi.id, pi.pid, pi.image
    from products as p
      join product_image as pi on p.id = pi.pid
    where not exists (select * from product_image as pi2
                      where pi2.pid = pi.pid
                        and pi2.id < pi.id)
    

    Alternatively, have a sub-query that returns each pid's minimum id, join one more time with that sub-query:

    select p.id, p.product, pi.id, pi.pid, pi.image
    from products as p
      join product_image as pi on p.id = pi.pid
      join (select pid, min(id) as id from product_image group by pid) pi2
          on pi.id = pi2.id and pi.pid = pi2.pid
    

    May execute faster on MySQL.
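
    Both variants are the classic greatest-n-per-group pattern; as a sanity check, the NOT EXISTS form in SQLite with made-up product data (image file names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (id INTEGER, product TEXT)")
con.execute("CREATE TABLE product_image (id INTEGER, pid INTEGER, image TEXT)")
con.executemany("INSERT INTO products VALUES (?, ?)", [(1, "chair"), (2, "desk")])
con.executemany("INSERT INTO product_image VALUES (?, ?, ?)",
                [(10, 1, "chair-a.jpg"), (11, 1, "chair-b.jpg"),
                 (12, 2, "desk-a.jpg")])

# Keep only each product's image with the lowest id.
rows = con.execute("""
    SELECT p.id, p.product, pi.image
    FROM products p
    JOIN product_image pi ON p.id = pi.pid
    WHERE NOT EXISTS (SELECT 1 FROM product_image pi2
                      WHERE pi2.pid = pi.pid AND pi2.id < pi.id)
    ORDER BY p.id
""").fetchall()
print(rows)  # [(1, 'chair', 'chair-a.jpg'), (2, 'desk', 'desk-a.jpg')]
```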

    qid & accept id: (35268549, 35268765) query: insert into a big table where the PK is not an identity soup:


    The recommended approach is to drop all indexes, including primary keys, when bulk loading data, as this speeds up the load and reduces pressure on the transaction log. However, you need to make sure you add the IDENTITY property to the new table prior to the load and use SET IDENTITY_INSERT .... ON to allow you to insert your old identity values.

    For this example, let's assume this is your destination table:

    CREATE TABLE dbo.YourTable(YourTableId INT IDENTITY(1,1), SomeData INT)
    

    You then need to use IDENTITY_INSERT...ON to ensure you can insert the data from your source table:

    SET IDENTITY_INSERT dbo.YourTable ON
    
    --copy data from source table
    INSERT INTO dbo.YourTable
    (YourTableId, SomeData)
    SELECT 1,1
    UNION
    SELECT 2,2
    

    After you have migrated the data, you need to switch IDENTITY_INSERT off again:

    SET IDENTITY_INSERT dbo.YourTable OFF
    

    Add the primary key:

    ALTER TABLE dbo.[YourTable] ADD CONSTRAINT PK_YourTable_YourTableID PRIMARY KEY CLUSTERED (YourTableID) 
    

    And then reseed your primary key with the RESEED value being equal to the current maximum PK value

    DBCC CHECKIDENT ('[YourTable]', RESEED, 2)
    

    After running this command, this record will be inserted with a value of 3 for YourTableId

    INSERT INTO dbo.YourTable
    SELECT 3
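
    For contrast, a hedged SQLite sketch of the same end state: SQLite's INTEGER PRIMARY KEY auto-assigns max(rowid)+1, so inserting old key values explicitly and then letting the engine continue needs no IDENTITY_INSERT/RESEED dance (table and column names follow the T-SQL example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE YourTable (YourTableId INTEGER PRIMARY KEY, SomeData INTEGER)")

# Load the old identity values explicitly...
con.executemany("INSERT INTO YourTable VALUES (?, ?)", [(1, 1), (2, 2)])
# ...then let the engine assign the next key (max + 1 = 3).
con.execute("INSERT INTO YourTable (SomeData) VALUES (3)")

rows = con.execute("SELECT YourTableId, SomeData FROM YourTable ORDER BY 1").fetchall()
print(rows)  # [(1, 1), (2, 2), (3, 3)]
```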
    
    qid & accept id: (35291850, 35292533) query: SQL concatenate strings soup:


    The way you are storing your data is really bad practice. However here is a solution for training purposes:

    DECLARE 
       @str1 varchar(30) = 'A1,B1,C1',
       @str2 varchar(30) = 'A2,B2,C2',
       @result varchar(60)
    
    ;WITH split as
    (
      SELECT t.c.value('.', 'VARCHAR(2000)') x
      FROM (
          SELECT x = CAST('<t>' + 
              REPLACE(@str1 + ',' + @str2, ',', '</t><t>') + '</t>' AS XML)
      ) a
    CROSS APPLY x.nodes('/t') t(c)
    )
    SELECT
      @result =
        STUFF(( 
            SELECT ',' + x
            FROM split
            ORDER BY x
            for xml path(''), type 
              ).value('.', 'varchar(max)'), 1, 1, '')
    
    SELECT @result
    

    Result:

    A1,A2,B1,B2,C1,C2
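
    The XML split/STUFF machinery is only doing "split both lists, sort the union, re-join"; the same transformation is a one-liner outside SQL, e.g. in Python (function name is made up):

```python
def merge_sorted_csv(s1: str, s2: str) -> str:
    # Split both comma lists, sort the combined values, and re-join --
    # the server-side XML trick above produces the same result.
    return ",".join(sorted(s1.split(",") + s2.split(",")))

print(merge_sorted_csv("A1,B1,C1", "A2,B2,C2"))  # A1,A2,B1,B2,C1,C2
```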
    
    qid & accept id: (35293084, 35293246) query: Duplicate (repeat) rows in sql query result soup:


    You can use generate_series():

    select t.id, t.value
    from (select t.id, t.value, generate_series(1, t.value)
          from t 
         ) t;
    

    You can do the same thing with a lateral join:

    select t.id, t.value
    from t, lateral
         generate_series(1, t.value);
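
    On engines without generate_series() (SQLite, for instance) a recursive CTE gives the same row multiplication; a sketch with made-up data where value is the repeat count:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, value INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)", [(1, 3), (2, 1)])

# Each row is paired with counters 1..value, so it is emitted value times.
rows = con.execute("""
    WITH RECURSIVE rep(id, value, n) AS (
        SELECT id, value, 1 FROM t
        UNION ALL
        SELECT id, value, n + 1 FROM rep WHERE n < value
    )
    SELECT id, value FROM rep ORDER BY id, n
""").fetchall()
print(rows)  # [(1, 3), (1, 3), (1, 3), (2, 1)]
```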
    
    qid & accept id: (35295861, 35296189) query: Converting nvarchar to DATE soup:


    You can try this:

    DECLARE @date VARCHAR(50) = '102915'
    
    SELECT   CAST(  CAST( '20'+                  --prefix for the year 2000
                          SUBSTRING( @date,5,2)+ --year
                          SUBSTRING( @date,1,2)+ --month
                          SUBSTRING( @date,3,2)  --day
                     AS VARCHAR(10)) 
              AS DATE)
    

    result: 2015-10-29

    But this assumes your dates are all after 1999.

    Since your date format is MMddYY, it's hard to determine the correct century for the year.

    So for your view you can use:

    create view v1 as
        Select CAST(  CAST( '20'+                  --prefix for the year 2000
                              SUBSTRING( [ISSUE],5,2)+ --year
                              SUBSTRING( [ISSUE],1,2)+ --month
                              SUBSTRING( [ISSUE],3,2)  --day
                         AS VARCHAR(10)) 
                  AS DATE) as ISSUE
              , 
            CAST(  CAST( '20'+                  --prefix for the year 2000
                              SUBSTRING( [EXPIRE],5,2)+ --year
                              SUBSTRING( [EXPIRE],1,2)+ --month
                              SUBSTRING( [EXPIRE],3,2)  --day
                         AS VARCHAR(10)) 
                  AS DATE)  as EXPIRE
        from tablename
    

    To get the date in the format mm-dd-yyyy you need to use CONVERT; you can see the different conversion styles in the documentation:

    create view v1 as
            Select convert(VARCHAR ,CAST( CAST( '20'+                  --prefix for the year 2000
                                  SUBSTRING( [ISSUE],5,2)+ --year
                                  SUBSTRING( [ISSUE],1,2)+ --month
                                  SUBSTRING( [ISSUE],3,2)  --day
                             AS VARCHAR(10)) 
                       AS DATE), 110) as ISSUE
                  , 
                convert(VARCHAR ,CAST(  CAST( '20'+                  --prefix for the year 2000
                                  SUBSTRING( [EXPIRE],5,2)+ --year
                                  SUBSTRING( [EXPIRE],1,2)+ --month
                                  SUBSTRING( [EXPIRE],3,2)  --day
                             AS VARCHAR(10)) 
                       AS DATE), 110)  as EXPIRE
            from tablename
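The substring shuffle above can be sanity-checked outside SQL as well; here is a minimal Python sketch of the same MMddYY rearrangement, under the same "everything is after 1999" assumption:

```python
def mmddyy_to_iso(s):
    """Rearrange an MMddYY string into yyyy-mm-dd, assuming years >= 2000."""
    month, day, year = s[0:2], s[2:4], s[4:6]
    return "20" + year + "-" + month + "-" + day

print(mmddyy_to_iso("102915"))  # 2015-10-29
```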
    
    qid & accept id: (35321999, 35325088) query: Entity Framework DB Migration Script soup:

    soup wrap:

    You can switch a project to code first and generate the scripts you need via migrations. See the link below for a guide on moving from db first to code first, but it sounds like you may be partially there already.

    1) enable-migrations for the project with your context if you haven't already.

    2) create a baseline migration. EF will use this as a starting point so you won't get a bunch of code to create the objects that already exist. The ignore changes flag tells EF not to create the existing objects. https://msdn.microsoft.com/en-us/data/dn579398.aspx?f=255&MSPPError=-2147217396#option1

    add-migration InitialCodeFirst -IgnoreChanges
    

    3) Now modify your schema as you normally would and create a migration:

    add-migration SomeNewThing
    

    4) Create a script for a different database (like PROD) by using -Script. This will not update your database it just creates a script, so I usually run it a second time without -Script:

    update-database -Script    // creates a script (in VS), but does not apply
    update-database            // updates the database your connect string points to
    

    5) DBA runs script and this will add a record to __MigrationHistory to identify it as being applied.

    http://devgush.com/2014/02/24/migrating-a-project-from-database-first-to-code-first/

    Here is a useful link on deployment: http://cpratt.co/migrating-production-database-with-entity-framework-code-first/#at_pco=smlwn-1.0&at_si=54ad5c7b61c48943&at_ab=per-12&at_pos=0&at_tot=1

    qid & accept id: (35330341, 35330720) query: How to split value using underscore (_) and period (.)? soup:

    soup wrap:

    Here is one way to do it in SQL Server

    ;WITH cte
         AS (SELECT Replace(col, '_', '.') + '.' AS col
             FROM   (VALUES ('Abc_abc_1.2.3'),
                            ('PQRST.abc_1'),
                            ('XY.143_z')) tc (col))
    SELECT original_col = col,
           column_1=COALESCE(LEFT(col, Charindex('.', col) - 1), ''),
           column_2=COALESCE(Substring(col, P1.POS + 1, P2.POS - P1.POS - 1), ''),
           column_3=COALESCE(Substring(col, P2.POS + 1, P3.POS - P2.POS - 1), ''),
           column_4=COALESCE(Substring(col, P3.POS + 1, P4.POS - P3.POS - 1), ''),
           column_5=COALESCE(Substring(col, P4.POS + 1, P5.POS - P4.POS - 1), '')
    FROM   cte
           CROSS APPLY (VALUES (CASE
                         WHEN Charindex('.', col) >= 1 THEN Charindex('.', col)
                       END)) AS P1(POS)
           CROSS APPLY (VALUES (CASE
                         WHEN Charindex('.', col, P1.POS + 1) >= 1 THEN Charindex('.', col, P1.POS + 1)
                       END)) AS P2(POS)
           CROSS APPLY (VALUES (CASE
                         WHEN Charindex('.', col, P2.POS + 1) >= 1 THEN Charindex('.', col, P2.POS + 1)
                       END )) AS P3(POS)
           CROSS APPLY (VALUES (CASE
                         WHEN Charindex('.', col, P3.POS + 1) >= 1 THEN Charindex('.', col, P3.POS + 1)
                       END)) AS P4(POS)
           CROSS APPLY (VALUES (CASE
                         WHEN Charindex('.', col, P4.POS + 1) >= 1 THEN Charindex('.', col, P4.POS + 1)
                       END)) AS P5(POS) 
    

    Result:

    ╔════════════════╦══════════╦══════════╦══════════╦══════════╦══════════╗
    ║  original_col  ║ column_1 ║ column_2 ║ column_3 ║ column_4 ║ column_5 ║
    ╠════════════════╬══════════╬══════════╬══════════╬══════════╬══════════╣
    ║ Abc.abc.1.2.3. ║ Abc      ║ abc      ║ 1        ║        2 ║        3 ║
    ║ PQRST.abc.1.   ║ PQRST    ║ abc      ║ 1        ║          ║          ║
    ║ XY.143.z.      ║ XY       ║ 143      ║ z        ║          ║          ║
    ╚════════════════╩══════════╩══════════╩══════════╩══════════╩══════════╝
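The normalize-then-split idea behind the CTE (replace `_` with `.`, split, pad out to a fixed number of columns) can be sketched in a few lines of Python, using the same sample values:

```python
def split_mixed(value, max_parts=5):
    """Normalize '_' to '.' (as the CTE does) and split into fixed columns."""
    parts = value.replace("_", ".").split(".")
    parts += [""] * (max_parts - len(parts))  # pad missing columns with ''
    return parts[:max_parts]

print(split_mixed("Abc_abc_1.2.3"))  # ['Abc', 'abc', '1', '2', '3']
print(split_mixed("PQRST.abc_1"))    # ['PQRST', 'abc', '1', '', '']
```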
    
    qid & accept id: (35354450, 35354462) query: list of states with the total number of units that have been sold to that state soup:

    soup wrap:

    You want to use a subquery to find the orderids that you want to delete in a child table:

    delete from orderdetail where orderid in (
         select orderid from orders
          where customerid = '12341'
    );
    

    Then you're able to delete the corresponding orders:

    delete from orders
     where customerid = '12341';
    

    If your tables are set up with a cascading delete, you can just execute the 2nd delete statement (without having to run the first statement at all).
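As a quick illustration of that last point, here is a minimal SQLite sketch (table names borrowed from the answer, data invented) showing the parent delete removing the child rows on its own when the foreign key is declared with ON DELETE CASCADE:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
con.execute("CREATE TABLE orders (orderid INTEGER PRIMARY KEY, customerid TEXT)")
con.execute("""CREATE TABLE orderdetail (
    orderid INTEGER REFERENCES orders(orderid) ON DELETE CASCADE,
    item TEXT)""")
con.execute("INSERT INTO orders VALUES (1, '12341'), (2, '99999')")
con.execute("INSERT INTO orderdetail VALUES (1, 'widget'), (1, 'gadget'), (2, 'bolt')")

# With ON DELETE CASCADE, deleting the parent removes the child rows too.
con.execute("DELETE FROM orders WHERE customerid = '12341'")
remaining = con.execute("SELECT COUNT(*) FROM orderdetail").fetchone()[0]
print(remaining)  # 1
```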

    qid & accept id: (35361258, 35361961) query: Calling Postgres Stored Procedure with arguments and insert values from a given select soup:

    soup wrap:

    You can do that using a CTE like below (only if you want to avoid a function):

    WITH cte (id)
    AS (
        INSERT INTO another_table (sensorname, starttime)
        SELECT sensorname
              ,starttime
        FROM sensors WHERE id = 12   -- the id you want to move
        RETURNING id
        )
    DELETE
    FROM
    sensors
    WHERE id IN (SELECT * FROM cte);
    

    OR

    By creating a function it can be like

    create or replace function fn(_id int) returns void as
    $$
    insert into another_table(sensorname, starttime)
    SELECT sensorname, starttime from sensors where id = _id;
    delete from sensors where id = _id;
    $$
    language sql;
    

    Usage:

    select fn(12)
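The same "move rows" idea can be sketched without Postgres-specific CTEs: an INSERT ... SELECT followed by a DELETE, wrapped in a single transaction so both succeed or fail together. A minimal SQLite version (column names from the question, data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sensors (id INTEGER, sensorname TEXT, starttime TEXT)")
con.execute("CREATE TABLE another_table (sensorname TEXT, starttime TEXT)")
con.execute("INSERT INTO sensors VALUES (12, 's1', 't1'), (13, 's2', 't2')")

with con:  # both statements commit or roll back together
    con.execute("INSERT INTO another_table "
                "SELECT sensorname, starttime FROM sensors WHERE id = ?", (12,))
    con.execute("DELETE FROM sensors WHERE id = ?", (12,))

print(con.execute("SELECT COUNT(*) FROM sensors").fetchone()[0])        # 1
print(con.execute("SELECT COUNT(*) FROM another_table").fetchone()[0])  # 1
```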
    
    qid & accept id: (35391959, 35392017) query: Joined query, with group by and sub count based on a column soup:

    soup wrap:

    If you are using a BIT column type you may need to cast this into an integer.

    I would suggest a CASE select statement, or a subselect.

    i.e.,

    Activated

    sum(CAST(activated as INT)) as TotalActivated
    

    or

    (select sum(CAST(activated as INT)) FROM products WHERE product_type_id = p.product_type_id) as TotalActivated
    

    Not activated

    sum(case when activated = 1 then 0 else 1 end) as NotActivated
    

    or

    (select sum(case when activated = 1 then 0 else 1 end) FROM products WHERE product_type_id = p.product_type_id) as NotActivated
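Both expressions rely on the SUM-of-CASE counting trick; here it is run end to end against a tiny invented `products` table in SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (product_type_id INTEGER, activated INTEGER)")
con.executemany("INSERT INTO products VALUES (?, ?)",
                [(1, 1), (1, 0), (1, 1), (2, 0)])

# SUM over a 0/1 column counts the 1s; the CASE inverts it to count the 0s.
row = con.execute("""
    SELECT SUM(activated) AS TotalActivated,
           SUM(CASE WHEN activated = 1 THEN 0 ELSE 1 END) AS NotActivated
    FROM products WHERE product_type_id = 1""").fetchone()
print(row)  # (2, 1)
```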
    
    qid & accept id: (35435745, 35455521) query: Postgresql query join only first row that match pattern soup:

    soup wrap:

    Just replace = with in:

    where ovr.progressivo in (
    

    If you want only the (whatever) first:

    select distinct on (ovr.progressivo)
        ovr.progressivo,
        ovr.art_codice,
        ovr.descrizione1,
        ovr.descrizione2,
        ovr.riga
    from ovr
    where ovr.progressivo in (
        select progressivo
        from ovr
        where
            ovr.art_codice ~~ '0034%'::text or
            ovr.art_codice ~~ '0035%'::text or
            ovr.art_codice ~~ '0036%'::text
        group by progressivo
    )
    order by ovr.progressivo
    

    If you have a criterion for which row comes first, add it to the order by clause.
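`distinct on` keeps the first row per key in ORDER BY order; the same idea in plain Python over already-sorted rows (data invented), as a sketch of what the query does:

```python
def first_per_key(rows, key):
    """Keep the first row for each key, like Postgres DISTINCT ON."""
    seen, result = set(), []
    for row in rows:  # rows assumed already sorted, like the ORDER BY
        k = key(row)
        if k not in seen:
            seen.add(k)
            result.append(row)
    return result

rows = [(1, "a"), (1, "b"), (2, "c"), (2, "d")]
print(first_per_key(rows, key=lambda r: r[0]))  # [(1, 'a'), (2, 'c')]
```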

    qid & accept id: (35471226, 35471974) query: SQL Regex to select string between second and third forward slash soup:

    soup wrap:

    What about split_part?

    SELECT split_part(column, '/', 3) FROM table
    

    Example:

    select split_part ('/abc/required_string/2/', '/', 3)
    

    Returns: required_string
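For reference, `split_part` uses 1-based field numbering and returns an empty string when the field doesn't exist; a small Python equivalent makes the indexing explicit:

```python
def split_part(s, delim, n):
    """Mimic Postgres split_part: 1-based field index, '' when out of range."""
    parts = s.split(delim)
    return parts[n - 1] if 1 <= n <= len(parts) else ""

# Field 1 is the empty string before the leading '/', so field 3 is the
# segment between the second and third slashes.
print(split_part("/abc/required_string/2/", "/", 3))  # required_string
```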

    qid & accept id: (35478088, 35479595) query: Fetch recursive tree with only certain elements "expanded" soup:

    soup wrap:

    Oracle Setup:

    CREATE TABLE hierarchy ( id, parent_id ) AS
      SELECT 1, NULL FROM DUAL UNION ALL
      SELECT 2, 1 FROM DUAL UNION ALL
      SELECT 3, 2 FROM DUAL UNION ALL
      SELECT 4, 1 FROM DUAL UNION ALL
      SELECT 5, 4 FROM DUAL UNION ALL
      SELECT 6, 5 FROM DUAL UNION ALL
      SELECT 7, NULL FROM DUAL UNION ALL
      SELECT 8, 7 FROM DUAL UNION ALL
      SELECT 9, 8 FROM DUAL UNION ALL
      SELECT 10, 9 FROM DUAL UNION ALL
      SELECT 11, 8 FROM DUAL;
    

    Query - IN clause has all parents expanded:

    SELECT LPAD( '+ ', LEVEL*2, ' ' ) || id
    FROM   hierarchy
    START WITH parent_id IS NULL
    CONNECT BY PRIOR id = parent_id
    AND        parent_id IN ( 1, 2, 4, 5, 7, 8, 9 );
    

    Output:

    + 1
      + 2
        + 3
      + 4
        + 5
          + 6
    + 7
      + 8
        + 9
          + 10
        + 11
    

    Query - IN clause has all parents expanded except 4 and 8:

    SELECT LPAD( '+ ', LEVEL*2, ' ' ) || id
    FROM   hierarchy
    START WITH parent_id IS NULL
    CONNECT BY PRIOR id = parent_id
    AND        parent_id IN ( 1, 2, 5, 7, 9 );
    

    Output:

    + 1
      + 2
        + 3
      + 4
    + 7
      + 8
    

    Update - Showing leaf nodes:

    SELECT LPAD( '+ ', LEVEL*2, ' ' ) || id AS value,
           isleaf
    FROM   (
      -- Find the leaves first (as if all parents are expanded)
      SELECT h.*,
             CONNECT_BY_ISLEAF AS isLeaf
      FROM   hierarchy h
      START WITH parent_id IS NULL
      CONNECT BY PRIOR id = parent_id
    )
    START WITH parent_id IS NULL
    CONNECT BY PRIOR id = parent_id
    AND        parent_id IN ( 1, 2, 4, 7, 9 );
    

    Output:

    VALUE                ISLEAF
    ---------------- ----------
    + 1                       0 
      + 2                     0 
        + 3                   1 
      + 4                     0 
        + 5                   0 
    + 7                       0 
      + 8                     0 
    

    1 Indicates that the node has no children and 0 indicates that the node has children (even though they might not be expanded).
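The CONNECT BY condition `parent_id IN (expanded)` amounts to "show every child of an expanded node, but only descend into nodes that are themselves expanded". A small Python sketch over the same tree as the Oracle setup makes that rule explicit:

```python
# Adjacency list matching the hierarchy table above; None is the root level.
children = {None: [1, 7], 1: [2, 4], 2: [3], 4: [5], 5: [6],
            7: [8], 8: [9, 11], 9: [10]}

def render(node, expanded, depth=0):
    lines = []
    for child in children.get(node, []):
        lines.append("  " * depth + "+ " + str(child))
        if child in expanded:  # only recurse into expanded parents
            lines += render(child, expanded, depth + 1)
    return lines

# Same expanded set as the second query (all parents except 4 and 8).
print("\n".join(render(None, expanded={1, 2, 5, 7, 9})))
```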

    qid & accept id: (35508881, 35509010) query: SQL/PostgreSQL: How to select limited amount of rows of different types based on limits stored in a different table? soup:

    soup wrap:

    You can always do it with a union.

    select top (SELECT Limit FROM Table2 WHERE _Element='A') * from Table1
    WHERE attribute = 'A'
    UNION ALL
    select top (SELECT Limit FROM Table2 WHERE _Element='B') * from Table1
    WHERE attribute = 'B'
    UNION ALL
    select top (SELECT Limit FROM Table2 WHERE _Element='C') * from Table1
    WHERE attribute = 'C'
    

    Or using row_number:

     with cte as (SELECT _Key, 
    attribute, 
    ROW_NUMBER() OVER (Partition by attribute Order by _Key ASC) as rowno
        From Table1)
        SELECT * FROM cte
        LEFT JOIN Table2 on Table2._Element = cte.attribute
        WHERE rowno <= Limit
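The row_number approach boils down to numbering rows within each attribute and keeping only those whose number is within that attribute's limit; a small Python sketch of the same per-group cutoff (limits and rows invented):

```python
limits = {"A": 2, "B": 1}  # per-attribute limits, like Table2

def take_limited(rows, limits):
    """Keep at most limits[key] rows per key, like ROW_NUMBER() <= Limit."""
    counts, kept = {}, []
    for key, value in rows:  # rows assumed in the desired order
        counts[key] = counts.get(key, 0) + 1
        if counts[key] <= limits.get(key, 0):
            kept.append((key, value))
    return kept

rows = [("A", 1), ("A", 2), ("A", 3), ("B", 4), ("B", 5)]
print(take_limited(rows, limits))  # [('A', 1), ('A', 2), ('B', 4)]
```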
    
    qid & accept id: (35574490, 35577315) query: Search a value in the column value that stores comma separated values soup:

    soup wrap:

    Oracle Setup:

    CREATE OR REPLACE FUNCTION split_String(
      i_str    IN  VARCHAR2,
      i_delim  IN  VARCHAR2 DEFAULT ','
    ) RETURN SYS.ODCIVARCHAR2LIST DETERMINISTIC
    AS
      p_result       SYS.ODCIVARCHAR2LIST := SYS.ODCIVARCHAR2LIST();
      p_start        NUMBER(5) := 1;
      p_end          NUMBER(5);
      c_len CONSTANT NUMBER(5) := LENGTH( i_str );
      c_ld  CONSTANT NUMBER(5) := LENGTH( i_delim );
    BEGIN
      IF c_len > 0 THEN
        p_end := INSTR( i_str, i_delim, p_start );
        WHILE p_end > 0 LOOP
          p_result.EXTEND;
          p_result( p_result.COUNT ) := SUBSTR( i_str, p_start, p_end - p_start );
          p_start := p_end + c_ld;
          p_end := INSTR( i_str, i_delim, p_start );
        END LOOP;
        IF p_start <= c_len + 1 THEN
          p_result.EXTEND;
          p_result( p_result.COUNT ) := SUBSTR( i_str, p_start, c_len - p_start + 1 );
        END IF;
      END IF;
      RETURN p_result;
    END;
    /
    
    CREATE TABLE xyz ( weekend_days ) AS
    SELECT 'SATURDAY,SUNDAY' FROM DUAL;
    
    CREATE TABLE abc ( act_date ) AS
    SELECT DATE '2016-02-02' FROM DUAL UNION ALL
    SELECT DATE '2016-02-06' FROM DUAL;
    

    Query:

    SELECT act_date,
           CASE WHEN w.Weekend_day IS NULL THEN 0 ELSE 1 END AS weekend_flag
    FROM  abc a
          LEFT OUTER JOIN
          ( SELECT t.column_value AS weekend_day
            FROM   xyz x,
                   TABLE( split_String( x.weekend_days ) ) t
          ) w
          ON TRIM( TO_CHAR( a.ACT_DATE, 'DAY' ) ) = w.Weekend_day;
    

    Output:

    ACT_DATE  WEEKEND_FLAG
    --------- ------------
    06-FEB-16            1 
    02-FEB-16            0 
    

    Alternate Query:

    SELECT act_date,
           CASE
             WHEN INSTR( x.weekend_days, TRIM( TO_CHAR( act_date, 'DAY' ) ) ) > 0
             THEN 1
             ELSE 0
             END AS weekend_flag
    FROM   abc a
           CROSS JOIN
           xyz x;
    

    This will give the same output and will work for names of days but will not work for a general case as you might get a false positive match to a sub-string.
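That false-positive risk is easy to see in miniature: splitting the comma-separated list and testing exact membership behaves differently from an INSTR-style substring test when one value is a prefix of another. A short Python sketch:

```python
weekend_days = "SATURDAY,SUNDAY"

def is_weekend_exact(day):
    """Split the list and test membership, like the TABLE(split_String(...)) join."""
    return day in weekend_days.split(",")

def is_weekend_substring(day):
    """Substring test, like INSTR > 0."""
    return day in weekend_days

print(is_weekend_exact("SUNDAY"), is_weekend_exact("SUN"))  # True False
print(is_weekend_substring("SUN"))  # True -- the false positive
```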

    qid & accept id: (35575481, 35576485) query: PostgreSQL- splitting rows soup:
    soup wrap:
    CREATE TABLE tosplit
            ( id text NOT NULL
            , name text
            , details text
            );
    
    INSERT INTO tosplit( id , name , details ) VALUES
     ( '1.3.1-3' , 'Jack' , 'a' )
    ,( '5.4.1-2' , 'John' , 'b' )
    ,( '1.4.5' , 'Alex' , 'c' );
    
    
    WITH zzz AS (
            SELECT id
            , regexp_replace(id, '([0-9\.]+\.)([0-9]+)-([0-9]+)', e'\\1', e'g') AS one
            , regexp_replace(id, '([0-9\.]+\.)([0-9]+)-([0-9]+)', e'\\2', e'g') AS two
            , regexp_replace(id, '([0-9\.]+\.)([0-9]+)-([0-9]+)', e'\\3', e'g') AS three
            , name
            , details
            FROM tosplit
            )
        SELECT z1.id
            -- , z1.one
            , z1.one || generate_series( z1.two::integer, z1.three::integer)::text AS four
            , z1.name, z1.details
    FROM zzz z1
    WHERE z1.two <> z1.one
    UNION ALL
    SELECT z0.id
            -- , z0.one
            , z0.one AS four
            , z0.name, z0.details
    FROM zzz z0
    WHERE z0.two = z0.one
            ;
    

    Result:

    CREATE TABLE
    INSERT 0 3
       id    | four  | name | details 
    ---------+-------+------+---------
     1.3.1-3 | 1.3.1 | Jack | a
     1.3.1-3 | 1.3.2 | Jack | a
     1.3.1-3 | 1.3.3 | Jack | a
     5.4.1-2 | 5.4.1 | John | b
     5.4.1-2 | 5.4.2 | John | b
     1.4.5   | 1.4.5 | Alex | c
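The regexp_replace/generate_series combination amounts to "capture prefix, low bound, and high bound, then enumerate the range; pass rows without a range through unchanged". A compact Python sketch of the same expansion:

```python
import re

def expand_id(id_):
    """Expand ids like '1.3.1-3' into '1.3.1'..'1.3.3'; others pass through."""
    m = re.fullmatch(r"([0-9.]+\.)([0-9]+)-([0-9]+)", id_)
    if not m:
        return [id_]
    prefix, lo, hi = m.group(1), int(m.group(2)), int(m.group(3))
    return [prefix + str(n) for n in range(lo, hi + 1)]

print(expand_id("1.3.1-3"))  # ['1.3.1', '1.3.2', '1.3.3']
print(expand_id("1.4.5"))    # ['1.4.5']
```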
    
    qid & accept id: (35577600, 35577640) query: Using wildcard characters in SUM function soup:

    soup wrap:

    Use left():

    SELECT ID, Classification,
           SUM(Area) over (partition by LEFT(Classification, 3) order by ID)
    FROM MyTable
    WHERE Classification LIKE '1-2-%' OR     
          Classification LIKE '1-4-%';
    

    EDIT:

    On second glance, you seem to want the data aggregated and then cumulatively summed. For this:

    SELECT ID, LEFT(Classification, 3),
           SUM(SUM(Area)) over (partition by LEFT(Classification, 3) order by ID)
    FROM MyTable
    WHERE Classification LIKE '1-2-%' OR     
          Classification LIKE '1-4-%'
    GROUP BY ID, LEFT(Classification, 3);
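The windowed `SUM(...) OVER (PARTITION BY ... ORDER BY ...)` is a running total kept separately per classification prefix; a minimal Python sketch of that accumulation (data invented):

```python
def running_totals(rows):
    """Running total per prefix, like SUM(Area) OVER (PARTITION BY prefix ORDER BY ID)."""
    totals, out = {}, []
    for prefix, area in rows:  # rows assumed ordered by ID
        totals[prefix] = totals.get(prefix, 0) + area
        out.append((prefix, totals[prefix]))
    return out

rows = [("1-2", 10), ("1-2", 5), ("1-4", 7), ("1-2", 1)]
print(running_totals(rows))  # [('1-2', 10), ('1-2', 15), ('1-4', 7), ('1-2', 16)]
```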
    
    qid & accept id: (35578867, 35578884) query: to find the string(e.g name) which ends with % in sql soup:

    soup wrap:

    Most versions of SQL support ESCAPE:

    where lastname like '%/%' escape '/'
    

    Other options are:

    • Look for the character using a function such as instr(), position(), charindex().
    • Use a regular expression.
    • Use right() or a substring function.

    For instance:

    where right(lastname, 1) = '%'
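As a quick check of the ESCAPE pattern, here is the same predicate run against SQLite, which also supports ESCAPE (the sample names are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (lastname TEXT)")
con.executemany("INSERT INTO people VALUES (?)",
                [("Smith%",), ("Smith",), ("Jo%nes",)])

# '%/%' with ESCAPE '/' means: any characters, then a literal '%' at the end.
rows = con.execute(
    "SELECT lastname FROM people WHERE lastname LIKE '%/%' ESCAPE '/'"
).fetchall()
print(rows)  # [('Smith%',)]
```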
    
    qid & accept id: (35640282, 35643150) query: Three table join sql server to get latest versions and count soup:

    soup wrap:

    There's still a lot of ambiguity but based on your description so far...

    First, let's transform the feedback table:

    This dataset gives you only the latest version as well as two columns corresponding to open and closed:

        SELECT 
        FeedbackID, FeedbackVersion,
        CASE WHEN Status='Open' THEN 1 ELSE 0 END As OpenCount,
        CASE WHEN Status='Closed' THEN 1 ELSE 0 END As ClosedCount
        FROM Feedback F
        WHERE FeedbackVersion = (
                 SELECT MAX(FeedbackVersion)
                 FROM Feedback FM
                 WHERE FM.FeedbackID = F.FeedbackID 
                 )
    

    Now we just join this back to projects:

    SELECT P.ProjectID, SUM(OpenCount), SUM(ClosedCount)
    FROM ProjectID P
    INNER JOIN
    (
        SELECT 
        FeedbackID, FeedbackVersion,
        CASE WHEN Status='Open' THEN 1 ELSE 0 END As OpenCount,
        CASE WHEN Status='Closed' THEN 1 ELSE 0 END As ClosedCount
        FROM Feedback F
        WHERE FeedbackVersion = (
                 SELECT MAX(FeedbackVersion)
                 FROM Feedback FM
                 WHERE FM.FeedbackID = F.FeedbackID 
                 )
    ) MaxVersion
    ON  P.FeedbackID=MaxVersion.FeedbackID
    AND P.FeedbackVersion=MaxVersion.FeedbackVersion
    GROUP BY P.ProjectID
    

    It's not clear what you want to do with ProjectID records with old versions. i.e. if project id 7 only exists in ProjectID against an old version, it will not appear in this query.
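    If those projects should still appear (with zero counts), one sketch is to switch to a LEFT JOIN and wrap the sums in COALESCE:

```sql
-- Sketch: keep ProjectID rows whose version has no match in MaxVersion
SELECT P.ProjectID,
       COALESCE(SUM(OpenCount), 0)   AS OpenCount,
       COALESCE(SUM(ClosedCount), 0) AS ClosedCount
FROM ProjectID P
LEFT JOIN
(
    -- same MaxVersion subquery as above
    SELECT FeedbackID, FeedbackVersion,
           CASE WHEN Status='Open' THEN 1 ELSE 0 END AS OpenCount,
           CASE WHEN Status='Closed' THEN 1 ELSE 0 END AS ClosedCount
    FROM Feedback F
    WHERE FeedbackVersion = (SELECT MAX(FeedbackVersion)
                             FROM Feedback FM
                             WHERE FM.FeedbackID = F.FeedbackID)
) MaxVersion
ON  P.FeedbackID = MaxVersion.FeedbackID
AND P.FeedbackVersion = MaxVersion.FeedbackVersion
GROUP BY P.ProjectID
```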

    qid & accept id: (35647425, 35648574) query: Update the total based on the previous row of balance soup:

    soup wrap:

    Here is a solution using a single user variable.

    The result is verified with the full demo attached.

    SQL:

    -- data preparation for demo
    create table tbl(Name char(100), id int, Col1 int, Col2 int, Col3 char(20), Col4 char(20), Total int, Balance int);
    insert into tbl values
    ('Row1',1,6,1,'A','Z',0,0),
    ('Row2',2,2,3,'B','Z',0,0),
    ('Row3',3,9,5,'B','Y',0,0),
    ('Row4',4,12,8,'C','Y',0,0);
    SELECT * FROM tbl;
    
    -- Query needed
    SET @bal = 0;
    UPDATE tbl
    SET
        Total = CASE    WHEN Col3 = 'A' and Col4 <> 'Z'
                            THEN Col1+Col2
                        WHEN Col3 = 'B' and Col4 <> 'Z'
                            THEN Col1-Col2
                        WHEN Col3 = 'C' and Col4 <> 'Z'
                            THEN Col1*Col2
                        ELSE 0 END,
        Balance = (@bal:=@bal + Total);
    SELECT * FROM tbl;
    

    Output (as expected):

    mysql> SELECT * FROM tbl;
    +------+------+------+------+------+------+-------+---------+
    | Name | id   | Col1 | Col2 | Col3 | Col4 | Total | Balance |
    +------+------+------+------+------+------+-------+---------+
    | Row1 |    1 |    6 |    1 | A    | Z    |     0 |       0 |
    | Row2 |    2 |    2 |    3 | B    | Z    |     0 |       0 |
    | Row3 |    3 |    9 |    5 | B    | Y    |     0 |       0 |
    | Row4 |    4 |   12 |    8 | C    | Y    |     0 |       0 |
    +------+------+------+------+------+------+-------+---------+
    4 rows in set (0.00 sec)
    
    mysql> -- Query needed
    mysql> SET @bal = 0;
    Query OK, 0 rows affected (0.00 sec)
    
    mysql> UPDATE tbl
        -> SET
        ->     Total = CASE    WHEN Col3 = 'A' and Col4 <> 'Z'
        ->                         THEN Col1+Col2
        ->                     WHEN Col3 = 'B' and Col4 <> 'Z'
        ->                         THEN Col1-Col2
        ->                     WHEN Col3 = 'C' and Col4 <> 'Z'
        ->                         THEN Col1*Col2
        ->                     ELSE 0 END,
        ->     Balance = (@bal:=@bal + Total);
    Query OK, 2 rows affected (0.00 sec)
    Rows matched: 4  Changed: 2  Warnings: 0
    
    mysql>
    mysql> SELECT * FROM tbl;
    +------+------+------+------+------+------+-------+---------+
    | Name | id   | Col1 | Col2 | Col3 | Col4 | Total | Balance |
    +------+------+------+------+------+------+-------+---------+
    | Row1 |    1 |    6 |    1 | A    | Z    |     0 |       0 |
    | Row2 |    2 |    2 |    3 | B    | Z    |     0 |       0 |
    | Row3 |    3 |    9 |    5 | B    | Y    |     4 |       4 |
    | Row4 |    4 |   12 |    8 | C    | Y    |    96 |     100 |
    +------+------+------+------+------+------+-------+---------+
    4 rows in set (0.00 sec)
    
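    On MySQL 8.0+, the running balance can also be computed without a user variable, using a window function (a sketch, assuming Total has already been filled in as above):

```sql
-- Window-function alternative (MySQL 8.0+): running sum of Total ordered by id
SELECT Name, id, Total,
       SUM(Total) OVER (ORDER BY id) AS Balance
FROM tbl;
```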
    qid & accept id: (35661363, 35661576) query: How to create a pseudo-column showing the "occurrence number" in a day in PostgreSQL? soup:

    soup wrap:

    It sounds like you just need GROUP BY:

    SELECT  uuid, project_name, to_char(analysis_date, 'YYYY-MM-DD') d, count(*)
    FROM    t
    GROUP BY uuid, project_name, d
    ORDER BY uuid, project_name, d
    ;
    

    EDIT:

    Okay, I realized you are not asking for a count but for the sequence number. In that case you can say this:

    SELECT  uuid, project_name, to_char(analysis_date, 'YYYY-MM-DD') d, 
            row_number() OVER (PARTITION BY analysis_date::date ORDER BY analysis_date)
    FROM    t
    ORDER BY uuid, project_name, d
    ; 
    

    Or if you want independent numbering within each project, include that in the PARTITION like this: PARTITION BY project_name, to_char(analysis_date, 'YYYY-MM-DD').

    Note that you are going to encounter time zone issues here, because Postgres has to decide when each day starts and ends. Since you have a TIMESTAMP WITHOUT TIME ZONE, there will be no automatic conversion based on the client's timezone settings.
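    The per-project variant mentioned above would look like this (same table t as before):

```sql
SELECT  uuid, project_name, to_char(analysis_date, 'YYYY-MM-DD') d,
        row_number() OVER (PARTITION BY project_name, to_char(analysis_date, 'YYYY-MM-DD')
                           ORDER BY analysis_date)
FROM    t
ORDER BY uuid, project_name, d;
```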

    qid & accept id: (35700944, 35701161) query: sql table with 4 bool columns and 1 int column. How to add all bools from one int together soup:

    soup wrap:

    Add the values together for each record, and then use an aggregate to sum the rows. Since they are Boolean (0/1) values, adding zero will not affect the summed result.

    SELECT sum(MO+DI+MI+DO) myResult
    FROM name
    where kw = 8
    

    If you need a data set grouped by each kw, you could do it this way as well; it performs better than querying each kw individually.

    SELECT kw, sum(MO+DI+MI+DO) myResult
    FROM name
    GROUP BY kw
    
    qid & accept id: (35701321, 35702068) query: Querying XML data from a SQL Server table soup:

    soup wrap:

    Try this:

    SELECT
        Name = xc.value('(NAME)[1]', 'varchar(50)'),
        CompEnabled = xc.value('(PROPERTIES/COMP_ENABLED)[1]', 'varchar(10)')
    FROM 
        dbo.YourTable
    CROSS APPLY
        SC.nodes('/SC_ROOT/COMPONENTS/COMPONENT') AS XT(XC)
    WHERE
        xc.value('(NAME)[1]', 'varchar(50)') LIKE '%Detection'
    

    The .nodes() call basically returns a "virtual table" with a table alias XT, which has one column (alias XC) containing the XML fragment that corresponds to the XPath expression. You reach into that fragment to extract the detailed info you need.

    Update: if your XML looks like this:

    
    <SC_ROOT>
      <COMPONENTS>
        <COMPONENT>
          <NAME>Status A Detection</NAME>
          <PROPERTIES NAME="COMP_ENABLED" VALUE="..." />
        </COMPONENT>
      </COMPONENTS>
    </SC_ROOT>

    then use this code to get the results:

    SELECT
        Name = xc.value('(NAME)[1]', 'varchar(50)'),
        CompEnabled = xc.value('(PROPERTIES[@NAME="COMP_ENABLED"]/@VALUE)[1]', 'varchar(10)')
    FROM 
        dbo.YourTable
    CROSS APPLY
        SC.nodes('/SC_ROOT/COMPONENTS/COMPONENT') AS XT(XC)
    WHERE
        xc.value('(NAME)[1]', 'varchar(50)') LIKE '%Detection'
    
    qid & accept id: (35753972, 35754304) query: Using countif in Oracle sql to calculate for each row soup:

    soup wrap:

    You can first UNPIVOT the data and then do conditional aggregation:

    SELECT ID, Name,
           COUNT(CASE WHEN val = 'High' THEN 1 END)   AS High,
           COUNT(CASE WHEN val = 'Medium' THEN 1 END) AS Medium,
           COUNT(CASE WHEN val = 'Low' THEN 1 END)    AS Low
    FROM tab
    UNPIVOT( val FOR col_name IN (Value1, Value2, Value3, ValueN)) unpvt
    GROUP BY ID, Name
    

    LiveDemo

    To handle more columns, just add them to:

    val FOR col_name IN (Value1, Value2, Value3, ValueN)
    

    And final query with all columns from original tab:

    SELECT t.*, sub.High, sub.Medium, sub.Low
    FROM tab t
    JOIN (SELECT ID, Name, 
             COUNT(CASE WHEN val = 'High' THEN 1 END)   AS High,
             COUNT(CASE WHEN val = 'Medium' THEN 1 END) AS Medium,
             COUNT(CASE WHEN val = 'Low' THEN 1 END)    AS Low
          FROM tab
          UNPIVOT( val FOR col_name IN (Value1, Value2, Value3, ValueN)) unpvt
          GROUP BY ID, Name) sub
      ON t.ID = sub.ID
    

    LiveDemo2

    Output:

    ╔════╦══════╦════════╦════════╦════════╦════════╦══════╦════════╦═════╗
    ║ ID ║ Name ║ Value1 ║ Value2 ║ Value3 ║ ValueN ║ High ║ Medium ║ Low ║
    ╠════╬══════╬════════╬════════╬════════╬════════╬══════╬════════╬═════╣
    ║  1 ║ A    ║ High   ║ High   ║ Low    ║ Medium ║    2 ║      1 ║   1 ║
    ║  2 ║ AB   ║ Low    ║ Medium ║ Low    ║ High   ║    1 ║      1 ║   2 ║
    ║  3 ║ ABC  ║ High   ║ Low    ║ Low    ║ High   ║    2 ║      0 ║   2 ║
    ╚════╩══════╩════════╩════════╩════════╩════════╩══════╩════════╩═════╝
    
    qid & accept id: (35756149, 35757302) query: Converting SQL query with inner select to sqlalchemy soup:

    soup wrap:

    The second argument of .join() is the join condition:

    db.session.query(RevisionModel.id) \
              .join(subquery, and_(subquery.c.content_id == RevisionModel.content_id,
                                   subquery.c.min_ts_created == RevisionModel.ts_created))
    

    You'll also need to make sure to convert your subquery into a select with correctly labeled columns:

    subquery = db.session.query(RevisionModel.content_id.label("content_id"),
                                func.min(RevisionModel.ts_created).label("min_ts_created")) \
        .group_by(RevisionModel.content_id) \
        .order_by(func.min(RevisionModel.ts_created).desc()) \
        .subquery("inner_result")
    
    qid & accept id: (35777522, 35777602) query: update table from a view present in ms sql database soup:

    soup wrap:

    Get everything from the view, then just add two columns as null.

    SELECT *, null as location, null as roles
    INTO new_table
    FROM the_view
    

    Then just update the location and roles fields for each row as you see fit.

    EDIT: After the initial creation you can do:

    INSERT INTO new_table
    SELECT *, null as location, null as roles
    FROM the_view
    

    or

    INSERT INTO new_table (firstName, lastName, employee_id)
    SELECT *
    FROM the_view
    
    qid & accept id: (35800153, 35800849) query: RENAME table if target table does not exist soup:

    soup wrap:

    This has been answered...

    Mysql: RENAME TABLE IF EXISTS

    With the following code... (all credit to original author)

    SELECT Count(*)
    INTO @exists
    FROM information_schema.tables 
    WHERE table_schema = [DATABASE_NAME]
    AND table_type = 'BASE TABLE'
    AND table_name = 'newName';
    SET @query = If(@exists=0,'RENAME TABLE oldName TO newName','SELECT \'nothing to rename\' status');
    PREPARE stmt FROM @query;
    EXECUTE stmt;
    

    When you don't want to replace [DATABASE_NAME] manually, you can use the following variable:

    SELECT DATABASE() INTO @db_name FROM DUAL;
    
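    Putting the two together, the existence check could then reference the variable instead of a hard-coded schema (a sketch using the same identifiers as above):

```sql
SELECT DATABASE() INTO @db_name FROM DUAL;

SELECT Count(*)
INTO @exists
FROM information_schema.tables
WHERE table_schema = @db_name
AND table_type = 'BASE TABLE'
AND table_name = 'newName';
```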
    qid & accept id: (35802709, 35802989) query: using MIN within WHERE soup:

    soup wrap:

    You don't need to limit it to only one:

    WHERE
       (otherPersons, date) IN (
          SELECT
             person, MIN(date)
          FROM
             orders
          WHERE
             date > "1/1/2015" and date < "1/31/2015"
          GROUP BY
            person
        )
          GROUP BY
            person
    

    This is the same as a join with two clauses

    JOIN  (
          SELECT
             person, MIN(date) as mindate
          FROM
             orders
          WHERE
             date > "1/1/2015" and date < "1/31/2015"
          GROUP BY
            person
        ) sub ON otherPersons = sub.person and date = sub.mindate
    
    qid & accept id: (35804288, 35805423) query: How to update the same column for multiple rows in DB2 soup:

    soup wrap:

    The query you need is

    UPDATE client SET address_1 = address_2, address_2 = ''
    WHERE address_1 = '' AND address_2 != ''
    

    The WHERE clause finds all the problem rows; the SET then moves address_2 into address_1 and blanks out address_2.

    Note: Make sure you're not confusing empty string '' with NULL. In DB2, those are not the same. If your values are actually NULL, your query would need to be:

    UPDATE client SET address_1 = address_2, address_2 = NULL
    WHERE address_1 IS NULL AND address_2 IS NOT NULL
    
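    If the data might contain a mix of blanks and NULLs, one way to cover both in a single pass is COALESCE (a sketch; verify against your actual data first):

```sql
-- Treat NULL and '' the same on both sides of the condition
UPDATE client SET address_1 = address_2, address_2 = NULL
WHERE COALESCE(address_1, '') = ''
  AND COALESCE(address_2, '') <> ''
```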
    qid & accept id: (35822621, 35822649) query: SQL query using NULL soup:

    soup wrap:

    Using a conditional aggregate, you can count the number of permanent students in each school.

    If the total count for a school is the same as its conditional count, then the school does not have any temporary students.

    Using JOIN

    SELECT sc.schid, 
           sc.schname 
    FROM   student s 
           JOIN school sc 
             ON s.schid = sc.schid 
    GROUP  BY sc.schid, 
              sc.schname 
    HAVING( CASE WHEN status IS NULL THEN 1 END ) = Count(*) 
    

    Another way using EXISTS

    SELECT sc.schid, 
           sc.schname 
    FROM   school sc 
    WHERE  EXISTS (SELECT 1 
                   FROM   student s 
                   WHERE  s.schid = sc.schid 
                   HAVING( CASE WHEN status IS NULL THEN 1 END ) = Count(*)) 
    
    qid & accept id: (35840854, 35840898) query: Get frist date from timestamp in SQL soup:

    soup wrap:

    You can easily use this:

    select sessid,min(timestart) FROM mytable GROUP by sessid;
    

    And for your second question, something like this:

    SELECT
      my.id,
      my.sessid,
      IF(my.timestart = m.timestart, 'yes', 'NO' ) AS First,
      my.timestart
    FROM mytable my
    LEFT JOIN 
      (
        SELECT sessid,min(timestart) AS timestart FROM mytable GROUP BY sessid
      ) AS m ON m.sessid = my.sessid;
    
    qid & accept id: (35869713, 35869777) query: How can I include primary key when using SELECT MAX() and GROUP BY? soup:

    soup wrap:

    You can use ROW_NUMBER for this:

    SELECT TrackID,  IMEI, LastDate
    FROM (
      SELECT TrackID,  IMEI, LastDate, 
             ROW_NUMBER() OVER (PARTITION BY IMEI 
                                ORDER BY LastPacketTime DESC) AS rn
      FROM dbo.Tracks) AS t
    WHERE t.rn = 1
    

    If you have multiple records sharing the same maximum LastPacketTime value and you want the TrackID values of all these records returned, then use RANK in place of ROW_NUMBER.

    Edit: In case of ties you can extend the ORDER BY clause of ROW_NUMBER so as to selectively pick either the smaller TrackID:

    ORDER BY LastPacketTime DESC, TrackID
    

    or the bigger one:

    ORDER BY LastPacketTime DESC, TrackID DESC
    
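    For completeness, the RANK variant mentioned above (which returns all ties for the maximum LastPacketTime) is the same query with one function swapped:

```sql
SELECT TrackID, IMEI, LastDate
FROM (
  SELECT TrackID, IMEI, LastDate,
         RANK() OVER (PARTITION BY IMEI
                      ORDER BY LastPacketTime DESC) AS rn
  FROM dbo.Tracks) AS t
WHERE t.rn = 1
```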
    qid & accept id: (35903825, 35903868) query: How to calculate "running total" in SQL soup:
    soup wrap:
    select id, fruit, row_number() over (partition by fruit order by id) as running_total
    from fruits
    order by id
    

    And then,

    alter table Fruits add RUNNING_TOTAL int null
    
    update fruits set running_total = subquery.running_total
    from fruits
    inner join (
     select id, row_number() over (partition by fruit order by id) as running_total
     from fruits
     )subquery on fruits.id = subquery.id
    
    select * from fruits
    
    qid & accept id: (35913338, 35914510) query: Substitute variable with case-insensitive pattern matching and match shown in report title soup:

    soup wrap:

    You can change the ttitle using a column value, as shown in the documentation:

    You can reference a column value in a top title by storing the desired value in a variable and referencing the variable in a TTITLE command

    Using Aleksej's data but expanded to multiple rows, you can do:

    set verify off
    
    accept subtitute_value prompt 'insert smt: ';
    
    ttitle center "Report for: " subttitle skip 5
    column column1 new_value subttitle
    break on column1 skip page
    
    select column1, column2, column3
    from table1
    where upper(table1.column1) like upper('%&subtitute_value%')
    order by column1; -- and others
    

    Which gets:

    insert smt: ora
    
                               Report for: 123 ORAC LE 333
    
    
    
    
    COLUMN1                 COLUMN2 COLUMN3
    -------------------- ---------- --------------------
    123 ORAC LE 333              95 Some text
                                 99 Some text
    
                                 Report for: xxOracleyy
    
    
    
    
    COLUMN1                 COLUMN2 COLUMN3
    -------------------- ---------- --------------------
    xxOracleyy                   13 Some text
                                  7 Some text
                                 42 Some text
                                  5 Some text
                                 71 Some text
    
    7 rows selected.
    

    The column ... new_value subttitle directive defines the subtitle referred to in the ttitle directive. Note that it is not referred to as a substitution variable, so there is no & here. You need to break on the column being used for the title, as noted in the documentation. And if you only want to see the value in the title, not in the report itself, you can add noprint to that column directive.

    qid & accept id: (36057894, 36057999) query: Count number of values per id soup:

    soup wrap:

    Do a GROUP BY, use COUNT (which only counts non-null values):

    select id,
           count(value1) as value1,
           count(value2) as value2,
           count(value3) as value3
    from table1
    group by id
    

    Edit:

    If the values are not null but '.' (or something else), use CASE expressions to do conditional counting, something like:

    select id,
           count(case when value1 <> '.' then 1 end) as value1,
           count(case when value2 <> '.' then 1 end) as value2,
           count(case when value3 <> '.' then 1 end) as value3
    from table1
    group by id
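    The NULL-skipping behaviour of COUNT is the whole trick here. A minimal sqlite3 sketch (table name and values invented) showing both variants:

```python
import sqlite3

# Invented sample: one NULL and one '.' placeholder, covering both scenarios.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id INTEGER, value1 TEXT, value2 TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?, ?, ?)", [
    (1, "a", None),   # NULL: skipped by plain COUNT(value2)
    (1, "b", "."),    # '.' is NOT NULL: needs the CASE expression
    (2, None, "x"),
])

# COUNT(col) skips NULLs; the CASE yields NULL for '.', so it is skipped too.
rows = conn.execute("""
    SELECT id,
           COUNT(value1) AS value1,
           COUNT(CASE WHEN value2 <> '.' THEN 1 END) AS value2
    FROM table1
    GROUP BY id
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 2, 0), (2, 0, 1)]
```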
    
    qid & accept id: (36086980, 36089055) query: Generate a parent-child hierarchy from table with levels paths soup:

    soup wrap:

    You could create a new table with the hierarchical structure, and an auto incrementing ID, like this:

    create table hierarchy (
      id int not null identity (1,1) primary key,
      element varchar(100),
      parent int
    );
    

    Then you would first add the level 1 elements to it, as they have no parent:

    insert into hierarchy (element, parent)
      select     distinct f.level1, null
      from       flat f;
    

    As you now have the id values generated for these elements, you can add the next level, like this:

    insert into hierarchy (element, parent)
      select     distinct f.level2, h1.id
      from       hierarchy h1
      inner join flat f
              on f.level1 = h1.element
      where      h1.parent is null;
    

    This pattern you can repeat to the next levels:

    insert into hierarchy (element, parent)
      select     distinct f.level3, h2.id
      from       hierarchy h1
      inner join hierarchy h2
              on h2.parent = h1.id
      inner join flat f
              on f.level1 = h1.element
             and f.level2 = h2.element
      where      h1.parent is null;
    
    insert into hierarchy (element, parent)
      select     distinct f.level4, h3.id
      from       hierarchy h1
      inner join hierarchy h2
              on h2.parent = h1.id
      inner join hierarchy h3
              on h3.parent = h2.id
      inner join flat f
              on f.level1 = h1.element
             and f.level2 = h2.element
             and f.level3 = h3.element
      where      h1.parent is null;
    
    insert into hierarchy (element, parent)
      select     distinct f.level5, h4.id
      from       hierarchy h1
      inner join hierarchy h2
              on h2.parent = h1.id
      inner join hierarchy h3
              on h3.parent = h2.id
      inner join hierarchy h4
              on h4.parent = h3.id
      inner join flat f
              on f.level1 = h1.element
             and f.level2 = h2.element
             and f.level3 = h3.element
             and f.level4 = h4.element
      where      h1.parent is null;
    
    insert into hierarchy (element, parent)
      select     distinct f.level6, h5.id
      from       hierarchy h1
      inner join hierarchy h2
              on h2.parent = h1.id
      inner join hierarchy h3
              on h3.parent = h2.id
      inner join hierarchy h4
              on h4.parent = h3.id
      inner join hierarchy h5
              on h5.parent = h4.id
      inner join flat f
              on f.level1 = h1.element
             and f.level2 = h2.element
             and f.level3 = h3.element
             and f.level4 = h4.element
             and f.level5 = h5.element
      where      h1.parent is null;
    

    ... etc, as far into the levels as needed.
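    The same two-step pattern can be sketched end-to-end in SQLite (the identity column becomes INTEGER PRIMARY KEY; the flat data here is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE flat (level1 TEXT, level2 TEXT);
    INSERT INTO flat VALUES ('root', 'a'), ('root', 'b');
    CREATE TABLE hierarchy (
      id INTEGER PRIMARY KEY,   -- auto-incrementing, like identity(1,1)
      element TEXT,
      parent INTEGER
    );
""")

# Level 1 first (no parent) ...
conn.execute("INSERT INTO hierarchy (element, parent) SELECT DISTINCT level1, NULL FROM flat")
# ... then level 2, joining back to pick up the ids generated for level 1.
conn.execute("""
    INSERT INTO hierarchy (element, parent)
    SELECT DISTINCT f.level2, h1.id
    FROM hierarchy h1
    JOIN flat f ON f.level1 = h1.element
    WHERE h1.parent IS NULL
""")
rows = conn.execute("SELECT element, parent FROM hierarchy ORDER BY id").fetchall()
print(rows)  # [('root', None), ('a', 1), ('b', 1)]
```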

    qid & accept id: (36117235, 36124719) query: SQL on Spark: How do I get all values of DISTINCT? soup:

    soup wrap:

    collect_list will give you a list without removing duplicates; collect_set will automatically remove duplicates. So just:

    select 
    Name,
    count(distinct color) as Distinct, -- not a very good name
    collect_set(Color) as Values
    from TblName
    group by Name
    

    This feature has been available since Spark 1.6.0; check it out:

    https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/functions.scala

    /**
       * Aggregate function: returns a set of objects with duplicate elements eliminated.
       *
       * For now this is an alias for the collect_set Hive UDAF.
       *
       * @group agg_funcs
       * @since 1.6.0
       */
      def collect_set(columnName: String): Column = collect_set(Column(columnName))
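    For readers without a Spark cluster at hand, a plain-Python sketch (sample rows invented) of what the query computes per Name — collect_list keeps duplicates, collect_set removes them:

```python
# Invented sample rows of (Name, Color).
rows = [("shirt", "red"), ("shirt", "red"), ("shirt", "blue"), ("hat", "green")]

groups = {}
for name, color in rows:
    groups.setdefault(name, []).append(color)   # collect_list semantics

# count(distinct color) and collect_set(Color) per Name:
result = {name: (len(set(colors)), sorted(set(colors)))
          for name, colors in groups.items()}
print(result)  # {'shirt': (2, ['blue', 'red']), 'hat': (1, ['green'])}
```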
    
    qid & accept id: (36131131, 36131218) query: How to sort my sql statement by products as well as report a quantity soup:

    soup wrap:

    First of all, do proper joins, then add a GROUP BY:

    SELECT o.ORDERNUMBER, p.PRODUCTNAME, SUM(od.quantity)
    FROM ORDERS o
      JOIN order_details od ON o.ORDERNUMBER= od.ORDERNUMBER
      JOIN PRODUCTS p ON od.ProductCode = p.ProductCode
    WHERE SHIPPEDDATE LIKE '%MAY-04'
    GROUP BY o.ORDERNUMBER, p.PRODUCTNAME
    

    I don't know Oracle very well, but I suppose you could do something like

    WHERE YEAR(SHIPPEDDATE) = 2004 and MONTH(SHIPPEDDATE) = 5
    

    Or, as in Gordon Linoff's answer

    WHERE SHIPPEDDATE >= DATE '2004-05-01' AND SHIPPEDDATE < DATE '2004-06-01'
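    The range form needs no date-to-string tricks and keeps the column untouched on the left-hand side. A sqlite3 sketch with invented rows (ISO date strings compare correctly as text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (ordernumber INTEGER, shippeddate TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [
    (1, "2004-05-15"),  # in May 2004
    (2, "2004-06-01"),  # excluded: half-open upper bound
    (3, "2004-04-30"),  # excluded: before May
])

rows = conn.execute("""
    SELECT ordernumber FROM orders
    WHERE shippeddate >= '2004-05-01' AND shippeddate < '2004-06-01'
""").fetchall()
print(rows)  # [(1,)]
```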
    
    qid & accept id: (36176385, 36176426) query: Moving value from below row to upper one soup:

    soup wrap:

    You can do what you want using aggregation:

    select max(UniqueDatabaseNo) as UniqueDatabaseNo, Creditor,
           max(case when BankAccountNo like '[a-Z][a-Z]%' then BankAccountNo end) as BankAccountNo
    from t
    group by Creditor;
    

    Edit:

    You might want conditional logic for UniqueDatabaseNo as well:

    select max(case when UniqueDatabaseNo > 0 then UniqueDatabaseNo end) as UniqueDatabaseNo
    

    This is not necessary for your sample data.
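    A sqlite3 sketch of the aggregation (data invented; SQLite's LIKE has no [a-Z] class, so GLOB stands in for the two-leading-letters pattern here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (UniqueDatabaseNo INTEGER, Creditor TEXT, BankAccountNo TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (1, "acme", None),       # the "upper" row missing the account number
    (2, "acme", "DE12345"),  # the "below" row carrying it
])

# MAX over the CASE picks the only non-NULL account value per creditor.
rows = conn.execute("""
    SELECT MAX(UniqueDatabaseNo), Creditor,
           MAX(CASE WHEN BankAccountNo GLOB '[A-Za-z][A-Za-z]*' THEN BankAccountNo END)
    FROM t GROUP BY Creditor
""").fetchall()
print(rows)  # [(2, 'acme', 'DE12345')]
```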

    qid & accept id: (36239016, 36292269) query: Database: item can have alternatives, how to link in database? soup:

    soup wrap:

    The suggestion I'm thinking about might be similar to what you proposed, but I'll try to formulate it in database terms.

    As far as I understand your requirements, your alternatives relationship will be completely transitive. This means your item set is partitioned in equivalence classes of subsets containing mutually alternative items. (If an item has no alternative yet, the subset consists of this item alone.)

    If that's true, then the most elegant and redundancy-free way to represent this is to choose one of the items of such a subset as the representative of the entire subset. This is reflected by the following table design:

    item(id, equivalence_id, other attributes, ...)
    

    where equivalence_id is a foreign key to the representative. Each item starts out with an equivalence id of null. If it is made equivalent to another item,

    • if the item already present has an equivalence id of null, assign the id of that item to the equivalence id of both items,

    • if the item already present has a non-null equivalence id, assign this to the equivalence id of the new item.

    Note that this works no matter which of the items in an equivalence class is used to link the new one.

    Example:

    id   equivalence_id   name
    1    1                abc
    2                     def
    3    1                ghi
    4    4                jkl
    5    4                mno
    

    This means abc and ghi are equivalent, as well as jkl and mno, but def isn't yet equivalent to anything. Now if pqr comes along and should become equivalent to abc, it would get equivalence id 1. The effect is the same as making it equivalent to ghi.

    To find all items equivalent to a specific one, query

    select *
    from item
    where equivalence_id = :my_equivalence_id
    

    If some information pertaining to the equivalence class as a whole should be stored, a separate table for the equivalence classes only should be created.
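    The two linking rules can be sketched in a few lines of Python (the ids and the dict-based storage are invented for illustration):

```python
# items maps item id -> equivalence_id (None means "no alternative yet").
items = {1: 1, 2: None, 3: 1}   # 1 and 3 are already equivalent

def make_equivalent(items, existing_id, new_id):
    """Link new_id into the equivalence class of existing_id."""
    if items[existing_id] is None:
        # Rule 1: the existing item becomes the representative of both.
        items[existing_id] = existing_id
        items[new_id] = existing_id
    else:
        # Rule 2: reuse the existing item's representative.
        items[new_id] = items[existing_id]

items[4] = None
make_equivalent(items, 3, 4)    # linking via 3 has the same effect as via 1
print(items)  # {1: 1, 2: None, 3: 1, 4: 1}
```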

    qid & accept id: (36249094, 36265084) query: How to index a comma separated text column using Oracle text soup:

    soup wrap:

    You need to create and tune your own lexer with the desired parameters (see the documentation).

    Something like this (sorry, not tested):

    begin
      ctx_ddl.create_preference('comma_lexer', 'BASIC_LEXER');
      ctx_ddl.set_attribute('comma_lexer', 'PRINTJOINS', '''()/^&"');
      ctx_ddl.set_attribute('comma_lexer', 'PUNCTUATIONS', ',.-?!');
    end;
    /
    
    create index node_sequence_index 
      on testtable(node_sequence)
      indextype is ctxsys.context 
      parameters ('lexer comma_lexer')
    ;
    

    Update

    Code from comment by @Chandan which works for conditions mentioned in the question:

    begin 
      ctx_ddl.create_preference('comma_lexer', 'BASIC_LEXER');
      ctx_ddl.set_attribute('comma_lexer', 'WHITESPACE', ',');
      ctx_ddl.set_attribute('comma_lexer', 'NUMGROUP', '#'); 
    end; 
    /
    
    create index node_sequence_index 
      on testtable(node_sequence) 
      indextype is ctxsys.context 
      parameters ('lexer comma_lexer')
    ;
    
    qid & accept id: (36281485, 36351784) query: Report Builder expression SQL list certain values only soup:

    soup wrap:

    Use Filter expression:

    =Join(Filter(Fields!Software.Value,"Office"),",")
    

    I added Join so you can display the list in a Text Box.

    If you want to get a particular element from the list, for example the first one, use this:

    =Filter(Fields!Software.Value,"Office")(0)
    
    qid & accept id: (36294617, 36295270) query: Creating view with additional column based on multiple condition soup:

    soup wrap:

    You shouldn't compare timestamp values using LIKE. To select the rows with a time between 06:00 and 14:00, cast the timestamp to a time and compare that:

    SELECT creation_time, product_id, warehouse_name
    FROM products 
    WHERE creation_time::time BETWEEN time '06:00:00' AND time '14:00:00';
    

    The same "trick" can be used to create the shift column:

    SELECT creation_time, product_id, warehouse_name, 
           case 
             when creation_time::time between time '06:00:00' AND time '14:00:00' then 1
             when creation_time::time BETWEEN time '14:00:01' AND time '22:00:00' then 2
             else 3
           end as shift_number
    FROM products;
    

    The BETWEEN operator includes both boundary values; that's why the second check starts 1 second after 14:00.

    time '06:00:00' specifies a time literal (constant). For more details on how to specify time (or timestamp) values, please see the manual:

    http://www.postgresql.org/docs/current/static/datatype-datetime.html#DATATYPE-DATETIME-INPUT
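    The same shift logic in Python, using half-open ranges; this sidesteps the 1-second gap, at the price of assigning exactly 14:00:00 to shift 2 rather than shift 1:

```python
from datetime import datetime, time

def shift_number(ts: datetime) -> int:
    # Half-open intervals: [06:00, 14:00) -> 1, [14:00, 22:00) -> 2, else 3.
    t = ts.time()
    if time(6, 0) <= t < time(14, 0):
        return 1
    if time(14, 0) <= t < time(22, 0):
        return 2
    return 3

print(shift_number(datetime(2016, 3, 30, 13, 59, 59)))  # 1
print(shift_number(datetime(2016, 3, 30, 14, 0, 0)))    # 2
print(shift_number(datetime(2016, 3, 30, 23, 0, 0)))    # 3
```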

    qid & accept id: (36330259, 36330470) query: Flattening xml data in sql soup:

    soup wrap:

    You can use this code to get the result you're looking for:

    ;WITH XMLNAMESPACES(DEFAULT 'http://somelongasslink.org/hasalsosomestuffhere')
    SELECT
        rq.Name,
        LocalID = TC.value('(LocalId)[1]', 'nvarchar(10)') 
    FROM 
         [database].[requests] rq
    CROSS APPLY
        rq.Data.nodes('/Dataform') AS TX(TC)
    GO
    

    There were two problems with your code:

    1. you're not respecting / including the XML namespace that's defined on the XML document

    2. you didn't pay attention to the case-sensitivity of XML in your call to .nodes() - you need to use .nodes('/Dataform') (not /DataForm - the F is not capitalized in your XML)
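    Both pitfalls show up in any XML tooling, not just SQL Server. A Python ElementTree sketch with a made-up namespace URI:

```python
import xml.etree.ElementTree as ET

# Made-up document: a default namespace plus mixed-case element names.
xml = '<Dataform xmlns="http://example.org/ns"><LocalId>42</LocalId></Dataform>'
root = ET.fromstring(xml)

# 1. The default namespace must be declared and used in the lookup.
ns = {"d": "http://example.org/ns"}
local_id = root.find("d:LocalId", ns).text
print(local_id)                 # '42'
# 2. Names are case-sensitive, and ignoring the namespace finds nothing.
print(root.find("LocalId"))     # None
```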

    qid & accept id: (36377860, 36378125) query: SELECT value WHERE IN SELECT soup:

    soup wrap:

    You want to select nicknames from users:

    select nickname
    from users
    

    You only want to select records where the phone number is in the set of friends of user 1:

    where phone in
    (
      select phone
      from phonebook
      where ownerid = 1
    )
    

    Combined:

    select nickname
    from users
    where phone in
    (
      select phone
      from phonebook
      where ownerid = 1
    );
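    A runnable sqlite3 sketch of the combined query, with invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (nickname TEXT, phone TEXT);
    INSERT INTO users VALUES ('alice', '111'), ('bob', '222');
    CREATE TABLE phonebook (ownerid INTEGER, phone TEXT);
    INSERT INTO phonebook VALUES (1, '111');  -- user 1 knows alice's number
""")

rows = conn.execute("""
    SELECT nickname FROM users
    WHERE phone IN (SELECT phone FROM phonebook WHERE ownerid = 1)
""").fetchall()
print(rows)  # [('alice',)]
```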
    
    qid & accept id: (36402026, 36402304) query: Tsql all ids and values in one row soup:

    soup wrap:

    Try it like this:

    CREATE TABLE #test(id INT,value VARCHAR(100),condition INT);
    INSERT INTO #test VALUES
     (1,'value1',0)
    ,(2,'value2',0)
    ,(3,'value3',1)
    ,(4,'value4',1);
    
    WITH MyDistinctConditions AS
    (
        SELECT DISTINCT condition
        FROM #test 
    )
    SELECT c.condition
          ,(SELECT STUFF(
                          (
                          SELECT ', ' + CAST(t.id AS VARCHAR(10)) 
                          FROM #test AS t 
                          WHERE t.condition=c.condition
                          FOR XML PATH('')),1,2,'')) AS ids
          ,(SELECT STUFF(
                          (
                          SELECT ', ' + t.value 
                          FROM #test AS t 
                          WHERE t.condition=c.condition
                          FOR XML PATH('')),1,2,'')) AS [values]
    FROM MyDistinctConditions AS c;
    
    DROP TABLE #test;
    

    The result

    condition   ids     values
    0           1, 2    value1, value2
    1           3, 4    value3, value4
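    The FOR XML PATH construct is just SQL Server's way of doing grouped string aggregation; the same shape falls out of SQLite's group_concat, shown here with the answer's sample rows (element order within a group is not guaranteed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (id INTEGER, value TEXT, condition INTEGER)")
conn.executemany("INSERT INTO test VALUES (?, ?, ?)", [
    (1, "value1", 0), (2, "value2", 0), (3, "value3", 1), (4, "value4", 1),
])

rows = conn.execute("""
    SELECT condition, group_concat(id, ', '), group_concat(value, ', ')
    FROM test GROUP BY condition ORDER BY condition
""").fetchall()
print(rows)  # e.g. [(0, '1, 2', 'value1, value2'), (1, '3, 4', 'value3, value4')]
```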
    
    qid & accept id: (36425557, 36426038) query: SQL replacement for Recursive CTE soup:

    soup wrap:

    You could do the brute-force approach with a table that you gradually populate. Assuming your test table looks something like:

    create table test (tablename varchar2(9), columnvalue varchar2(11), rankofcolumn number);
    

    then the result table could be created with:

    create table result (tablename varchar2(9), columnvalue varchar2(11), rankofcolumn number,
      path varchar2(50));
    

    Then create the result entries for the lowest rank:

    insert into result (tablename, columnvalue, rankofcolumn, path)
    select t.tablename, t.columnvalue, t.rankofcolumn, t.columnvalue
    from test t
    where t.rankofcolumn = 1;
    
    3 rows inserted.
    

    And repeatedly add rows building on the highest existing rank, getting the following values (if there are any for that tablename) from the test table:

    insert into result (tablename, columnvalue, rankofcolumn, path)
    select t.tablename, t.columnvalue, t.rankofcolumn,
      concat(concat(r.path, '->'), t.columnvalue)
    from test t
    join result r
    on r.tablename = t.tablename
    and r.rankofcolumn = t.rankofcolumn - 1
    where t.rankofcolumn = 2;
    
    3 rows inserted.
    
    insert into result (tablename, columnvalue, rankofcolumn, path)
    select t.tablename, t.columnvalue, t.rankofcolumn,
      concat(concat(r.path, '->'), t.columnvalue)
    from test t
    join result r
    on r.tablename = t.tablename
    and r.rankofcolumn = t.rankofcolumn - 1
    where t.rankofcolumn = 3;
    
    2 rows inserted.
    
    insert into result (tablename, columnvalue, rankofcolumn, path)
    select t.tablename, t.columnvalue, t.rankofcolumn,
      concat(concat(r.path, '->'), t.columnvalue)
    from test t
    join result r
    on r.tablename = t.tablename
    and r.rankofcolumn = t.rankofcolumn - 1
    where t.rankofcolumn = 4;
    
    1 row inserted.
    

    And keep going for the maximum possible number of columns (i.e. highest rankofcolumn for any table). You may be able to do that procedurally in WX2, iterating until zero rows are inserted; but you've made it sound pretty limited.

    After all those iterations the table now contains:

    select * from result
    order by tablename, rankofcolumn;
    
    TABLENAME COLUMNVALUE RANKOFCOLUMN PATH                                             
    --------- ----------- ------------ --------------------------------------------------
    A         C1                     1 C1                                                
    A         C2                     2 C1->C2                                            
    A         C3                     3 C1->C2->C3                                        
    A         C4                     4 C1->C2->C3->C4                                    
    B         CX1                    1 CX1                                               
    B         CX2                    2 CX1->CX2                                          
    C         CY1                    1 CY1                                               
    C         CY2                    2 CY1->CY2                                          
    C         CY3                    3 CY1->CY2->CY3                                     
    

    Tested in Oracle but trying to avoid anything Oracle-specific; might need tweaking for WX2 of course.
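    The "keep going until nothing is inserted" idea is easy to drive from a client loop instead of copy-pasting one INSERT per rank. A sqlite3 sketch with a one-table subset of the data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test (tablename TEXT, columnvalue TEXT, rankofcolumn INTEGER);
    INSERT INTO test VALUES ('A', 'C1', 1), ('A', 'C2', 2), ('A', 'C3', 3);
    CREATE TABLE result (tablename TEXT, columnvalue TEXT, rankofcolumn INTEGER, path TEXT);
""")

# Seed rank 1, then keep joining rank n to rank n-1 until an INSERT adds 0 rows.
conn.execute("""
    INSERT INTO result
    SELECT tablename, columnvalue, rankofcolumn, columnvalue
    FROM test WHERE rankofcolumn = 1
""")
rank = 2
while True:
    cur = conn.execute("""
        INSERT INTO result
        SELECT t.tablename, t.columnvalue, t.rankofcolumn,
               r.path || '->' || t.columnvalue
        FROM test t
        JOIN result r ON r.tablename = t.tablename
                     AND r.rankofcolumn = t.rankofcolumn - 1
        WHERE t.rankofcolumn = ?
    """, (rank,))
    if cur.rowcount == 0:
        break
    rank += 1

paths = conn.execute("SELECT path FROM result ORDER BY rankofcolumn").fetchall()
print(paths)  # [('C1',), ('C1->C2',), ('C1->C2->C3',)]
```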

    qid & accept id: (36433001, 36433104) query: With structure - using multiple select queries without repeating the "with" soup:

    soup wrap:

    A common table expression works only for one statement.

    Specifies a temporary named result set, known as a common table expression (CTE). This is derived from a simple query and defined within the execution scope of a single SELECT, INSERT, UPDATE, or DELETE statement.

    select id from temptable;
    select name from temptable;
    

    are two statements, so you cannot use it in the second query.

    The alternative is to use a temp table:

    SELECT .... INTO #temptable FROM ...; -- your query from CTE
    SELECT id   FROM #temptable; 
    SELECT name FROM #temptable;
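
The difference in scope is easy to demonstrate outside SQL Server too. A minimal sketch using Python's sqlite3 (the `people` table, its rows, and the temp-table name are all made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (id INTEGER, name TEXT)")
con.execute("INSERT INTO people VALUES (1, 'Ann'), (2, 'Bob')")

# A CTE lives only for the single statement it prefixes...
ids = con.execute(
    "WITH temptable AS (SELECT id, name FROM people) SELECT id FROM temptable"
).fetchall()

# ...so a second statement would have to repeat the WITH clause.
# A temp table materializes the result once and any later statement can read it.
con.execute("CREATE TEMP TABLE t2 AS SELECT id, name FROM people")
names = [r[0] for r in con.execute("SELECT name FROM t2 ORDER BY id")]
```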
    
    qid & accept id: (36435813, 36518817) query: Need multiple maxdates-oracle sql developer 4.0.2 soup:

    soup wrap:

    You can do it much more easily, and probably faster:

    WITH
      parms as (select 'Phase 1' AS "Phase1", 'Phase 2' AS "Phase2", 'Phase 3' AS "Phase3", '{091225F8-4606-401C-872E-FC5ACDC1D8E2}' AS case_id from dual)
    SELECT 
    dc.case_id, 
    parms."Phase1", 
    (SELECT Max(updated_ts) FROM a_identifiers WHERE identifier_value = parms."Phase1" AND group_ID = dc.case_ID) AS "Phase1Enddt",
    parms."Phase2",
    (SELECT Max(updated_ts) FROM a_identifiers WHERE identifier_value = parms."Phase2" AND group_ID = dc.case_ID) AS "Phase2Enddt",
    parms."Phase3",
    (SELECT Max(updated_ts) FROM a_identifiers WHERE identifier_value = parms."Phase3" AND group_ID = dc.case_ID) AS "Phase3Enddt"
    FROM parms, cmreporting.d_solution ds
    INNER JOIN cmreporting.d_case dc ON ds.solution_sqn = dc.solution_sqn
    WHERE dc.case_id = parms.case_id 
    AND rownum = 1 -- if there is more than one row
    

    EDIT:
    you can get the same result with this query:

    WITH
      parms as (select 'Phase 1' AS "Phase1", 'Phase 2' AS "Phase2", 'Phase 3' AS "Phase3", '{091225F8-4606-401C-872E-FC5ACDC1D8E2}' AS case_id from dual)
    SELECT 
    group_ID as case_id, 
    parms."Phase1", 
    Max(Case When identifier_value = parms."Phase1" Then updated_ts End) AS "Phase1Enddt",
    parms."Phase2",
    Max(Case When identifier_value = parms."Phase2" Then updated_ts End) AS "Phase2Enddt",
    parms."Phase3",
    Max(Case When identifier_value = parms."Phase3" Then updated_ts End) AS "Phase3Enddt"
    FROM parms, a_identifiers
    Where Exists (Select 1 From cmreporting.d_solution ds
      INNER JOIN cmreporting.d_case dc ON ds.solution_sqn = dc.solution_sqn
      WHERE dc.case_id = a_identifiers.group_ID)
    AND group_ID = parms.case_ID
    GROUP BY group_ID
    
    qid & accept id: (36468687, 36468887) query: How to perform an action on one result at a time in a sql query return that should return multiple results? soup:

    soup wrap:

    Use a loop to iterate through the results of your query.

    SELECT EmailAddress
    FROM Customers
    WHERE EmailFlag = 'true'
    AND DATEDIFF(day, DateOfVisit, GETDATE()) >= 90;
    

    Replace day with other units you want to get the difference in, like second, minute etc.

    c#:

    foreach(DataRow dr in queryResult.Tables[0].Rows)
    {
       string email = dr["EmailAddress"].ToString();
       // Code to send email
       //Execute Query UPDATE Customers SET EmailFlag = False WHERE EmailAddress = email 
    }
    

    This is just a draft. You should replace the comments with the actual code to make it work. No need to fetch from your initial query 1 result at a time.
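
A minimal, runnable version of the same fetch-then-update loop, sketched with Python's sqlite3 instead of ADO.NET (the table and its rows are hypothetical, and the send step stays a placeholder):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Customers (EmailAddress TEXT, EmailFlag TEXT)")
con.execute("INSERT INTO Customers VALUES ('a@x.com', 'true'), ('b@x.com', 'false')")

# Fetch the full result set first, then act on one row at a time,
# clearing the flag after each (placeholder) send.
rows = con.execute(
    "SELECT EmailAddress FROM Customers WHERE EmailFlag = 'true'"
).fetchall()
for (email,) in rows:
    # send_email(email)  -- placeholder for the real send
    con.execute(
        "UPDATE Customers SET EmailFlag = 'false' WHERE EmailAddress = ?",
        (email,),
    )

remaining = con.execute(
    "SELECT COUNT(*) FROM Customers WHERE EmailFlag = 'true'"
).fetchone()[0]
```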

    qid & accept id: (36478669, 36478875) query: SQL: converting a + separated list to an integer soup:

    soup wrap:

    If you are using SQL Server, you could split the string and calculate the sum:

    CREATE TABLE tab(ID INT IDENTITY(1,1), col VARCHAR(1000));
    
    INSERT INTO tab(col) VALUES('5212667+5212662'),('1+2+3'),('2'), (NULL), ('1+-1');
    
    SELECT *
    FROM tab
    CROSS APPLY (
        SELECT [result] = SUM( Split.a.value('.', 'BIGINT'))
        FROM (SELECT [X] = CAST('<M>' + REPLACE(col, '+', '</M><M>') + '</M>' AS XML)) AS A 
        CROSS APPLY X.nodes ('/M') AS Split(a)
    ) AS s;
    


    Output:

    ╔════╦═════════════════╦══════════╗
    ║ ID ║       col       ║  result  ║
    ╠════╬═════════════════╬══════════╣
    ║  1 ║ 5212667+5212662 ║ 10425329 ║
    ║  2 ║ 1+2+3           ║ 6        ║
    ║  3 ║ 2               ║ 2        ║
    ║  4 ║ NULL            ║ NULL     ║
    ║  5 ║ 1+-1            ║ 0        ║
    ╚════╩═════════════════╩══════════╝
    

    The correct way is to normalize your table schema.
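
If you are not on SQL Server, the same split-and-sum is often simpler in client code than the XML trick above. A small Python sketch, assuming the same '+'-separated format (the function name is made up):

```python
def plus_sum(col):
    """Sum a '+'-separated list like '5212667+5212662'; None stays None."""
    if col is None:
        return None
    # split('+') also handles negatives: '1+-1' -> ['1', '-1']
    return sum(int(part) for part in col.split('+'))
```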

    qid & accept id: (36510415, 36510600) query: one attribute referencing, attributes in two different tables soup:

    soup wrap:

    Can you do something like this?

    CREATE TABLE subscription (
      custID     INT
                 CONSTRAINT subscription__custid__fk REFERENCES Customer( CustomerId ),
      name       VARCHAR2(50)
                 CONSTRAINT subscription__mag_name__fk REFERENCES Magazine( Name )
                 CONSTRAINT subscription__news_name__fk REFERENCES Newspaper( Name ),
      startdate  DATE
                 CONSTRAINT subscription__startdate__nn NOT NULL,
      enddate    DATE
    );
    

    Yes, you can, and you will have two foreign keys on the same column pointing to different tables. But if the value in the column is non-null, it will expect a matching name in both the magazines table and the newspapers table, which is probably not what you are after.

    Can you have a foreign key that requires the value to be in either this table or that table (but not in both)? No.

    But you can refactor your database to merge the newspapers and magazines tables into a single table (which you can then easily reference), like this:

    CREATE TABLE customer (
      CustomerID INT
                 CONSTRAINT customer__CustomerId__pk PRIMARY KEY,
      name       VARCHAR2(50)
                 CONSTRAINT customer__name__nn NOT NULL
    );
    
    CREATE TABLE Publications (
      id         INT
                 CONSTRAINT publications__id__pk PRIMARY KEY,
      name       VARCHAR2(50)
                 CONSTRAINT publications__name__nn NOT NULL,
      cost       NUMBER(6,2)
                 CONSTRAINT publications__cost__chk CHECK ( cost >= 0 ),
      noofissues INT,
      type       CHAR(1),
                 CONSTRAINT publications__type__chk CHECK ( type IN ( 'M', 'N' ) )
    );
    
    CREATE TABLE subscription (
      custID     INT
                 CONSTRAINT subscription__custid__fk REFERENCES Customer( CustomerId ),
      pubID      INT
                 CONSTRAINT subscription__pubid__fk REFERENCES Publications( Id ),
      startdate  DATE
                 CONSTRAINT subscription__startdate__nn NOT NULL,
      enddate    DATE
    );
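
The payoff of the merged table is that the foreign key now actually enforces something. A minimal sqlite3 sketch with the schema trimmed to the relevant columns (note that SQLite only enforces foreign keys when the pragma is turned on):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
con.executescript("""
CREATE TABLE Publications (
  id   INTEGER PRIMARY KEY,
  name TEXT NOT NULL,
  type CHAR(1) CHECK (type IN ('M', 'N'))   -- magazine or newspaper
);
CREATE TABLE subscription (
  pubID INTEGER REFERENCES Publications(id)
);
INSERT INTO Publications VALUES (1, 'Daily Bugle', 'N');
""")

con.execute("INSERT INTO subscription VALUES (1)")       # known publication: OK
try:
    con.execute("INSERT INTO subscription VALUES (99)")  # unknown id: rejected
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```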
    
    qid & accept id: (36518328, 36518378) query: Readable "always false" evaluation in TSQL soup:

    soup wrap:

    Although you can use where 1 = 0 for this purpose, I think top 0 is more common:

    select top 0 . . . 
    . . .
    

    This also prevents an "accident" in the where clause. If you change this:

    where condition x or condition y
    

    to:

    where 1 = 0 and condition x or condition y
    

    then AND binds more tightly than OR, so rows matching condition y still come back - the 1 = 0 does not disable the whole clause. The parentheses are wrong.
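
To see the effect, here is a small sketch with Python's sqlite3, which has no TOP, so it uses the where 1 = 0 form to create an empty structural copy (table names invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE src (id INTEGER, name TEXT)")
con.execute("INSERT INTO src VALUES (1, 'x')")

# 1 = 0 is never true, so this copies the column layout but no rows.
con.execute("CREATE TABLE empty_copy AS SELECT * FROM src WHERE 1 = 0")

cols = [d[0] for d in con.execute("SELECT * FROM empty_copy").description]
count = con.execute("SELECT COUNT(*) FROM empty_copy").fetchone()[0]
```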

    qid & accept id: (36535515, 36535623) query: how to use not in clause with SUM clause in SQL soup:

    soup wrap:

    You can use the NOT IN clause in the WHERE clause:

    SELECT 
    student_record.id,
    SUM(courses.crd * grades.points) AS sum_grade_credits
    FROM grades 
    INNER JOIN student_record ON grades.letter = student_record.grade
    INNER JOIN courses ON courses.course_no = student_record.course_no 
    WHERE student_record.id=2255
    AND student_record.grade not in ('NP', 'NF');
    

    or you can use it in the join condition:

    SELECT 
    student_record.id,
    SUM(courses.crd * grades.points) AS sum_grade_credits
    FROM grades 
    INNER JOIN student_record ON (grades.letter = student_record.grade 
      and student_record.grade  not in ( 'NP', 'NF'))
    INNER JOIN courses ON courses.course_no = student_record.course_no 
    WHERE student_record.id=2255;
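
A runnable miniature of the first form, using Python's sqlite3 with made-up grade data (SQL Server syntax differs only trivially here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE grades (letter TEXT, points REAL);
CREATE TABLE courses (course_no INTEGER, crd INTEGER);
CREATE TABLE student_record (id INTEGER, course_no INTEGER, grade TEXT);
INSERT INTO grades VALUES ('A', 4.0), ('B', 3.0), ('NP', 0.0);
INSERT INTO courses VALUES (101, 3), (102, 4), (103, 3);
INSERT INTO student_record VALUES
  (2255, 101, 'A'), (2255, 102, 'B'), (2255, 103, 'NP');
""")

# NOT IN in the WHERE clause drops the non-graded rows before summing:
# 3 * 4.0 + 4 * 3.0 = 24.0, and the 'NP' row contributes nothing.
total = con.execute("""
    SELECT SUM(courses.crd * grades.points)
    FROM grades
    JOIN student_record ON grades.letter = student_record.grade
    JOIN courses ON courses.course_no = student_record.course_no
    WHERE student_record.id = 2255
      AND student_record.grade NOT IN ('NP', 'NF')
""").fetchone()[0]
```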
    
    qid & accept id: (36550777, 36552125) query: Average rating from SQL database survey data soup:

    soup wrap:

    In MySQL:

    SELECT  o.*,
            AVG(answer_3)
    FROM    surveyUserResponse sur
    JOIN    workshops w
    USING   (survey_id)
    JOIN    organizers o
    USING   (organizer_id)
    GROUP BY
            organizer_id
    

    In SQL Server:

    SELECT  *
    FROM    organizers o
    CROSS APPLY
            (
            SELECT  AVG(answer_3)
            FROM    workshops w
            JOIN    surveyUserResponse sur
            ON      sur.survey_id = w.survey_id
            WHERE   w.organizer_id = o.organizer_id
            ) q (rating)
    
    qid & accept id: (36569182, 36569386) query: SQL distinct values in multiple tables soup:

    soup wrap:

    This should get what you want:

    select A.Name, B.Surname, count(*), C.Pages
      from TableA A
           Join TableB B on A.Number = B.Number
           Join TableC C on A.Number = C.Number
    group by A.Name, B.Surname, C.Pages;
    

    Alternatively you could do it with a sub-query if it makes subsequent alterations easier, though generally speaking these don't perform as well:

    select A.Name, B.Surname,
           (select count(*)
              from TableB B
             where B.Number = A.Number) As CNT,
           C.Pages
      from TableA A
           Join TableC C on A.Number = C.Number;
    
    qid & accept id: (36572728, 36572933) query: Joining one table through other tables to another soup:

    soup wrap:

    Try using conditional aggregation with a CASE expression. Your idea is correct: first join all three tables, then join to the manager table on m.manager_id IN(s.manager_id, r.manager_id, c.manager_id). The aggregation afterwards pivots the output, since otherwise there would be 3 records for each name, one manager_name per table.

    SELECT s.name,
          MAX(CASE WHEN s.manager_id = m.manager_id THEN m.name END) as GM_NAME,
          MAX(CASE WHEN r.manager_id= m.manager_id THEN m.name END) as RM_NAME,
          MAX(CASE WHEN c.manager_id = m.manager_id THEN m.name END) as CM_NAME
    FROM site_table s
    INNER JOIN region_table r ON(s.region_id = r.region_id)
    INNER JOIN country_table c ON(s.country_id = c.country_id)
    INNER JOIN manager_table m ON(m.manager_id IN(s.manager_id,r.manager_id,c.manager_id))
    GROUP BY s.name
    

    If you can have missing data (nulls in any of the site_table id columns), which doesn't look likely from the data you provided or from the idea of this structure, then use a LEFT JOIN instead:

    SELECT s.name,
          MAX(CASE WHEN s.manager_id = m.manager_id THEN m.name END) as GM_NAME,
          MAX(CASE WHEN r.manager_id= m.manager_id THEN m.name END) as RM_NAME,
          MAX(CASE WHEN c.manager_id = m.manager_id THEN m.name END) as CM_NAME
    FROM site_table s
    LEFT JOIN region_table r ON(s.region_id = r.region_id)
    LEFT JOIN country_table c ON(s.country_id = c.country_id)
    LEFT JOIN manager_table m ON(m.manager_id IN(s.manager_id,r.manager_id,c.manager_id))
    GROUP BY s.name
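
The fan-out-then-fold behaviour is easy to verify in miniature with Python's sqlite3 (schema trimmed to just a site manager and a region manager; all names invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE site_table (name TEXT, manager_id INTEGER, region_id INTEGER);
CREATE TABLE region_table (region_id INTEGER, manager_id INTEGER);
CREATE TABLE manager_table (manager_id INTEGER, name TEXT);
INSERT INTO site_table VALUES ('Site A', 1, 10);
INSERT INTO region_table VALUES (10, 2);
INSERT INTO manager_table VALUES (1, 'Gail'), (2, 'Rhea');
""")

# The IN(...) join fans each site out to one row per matching manager;
# MAX(CASE ...) then folds those rows back into one row with a column
# per manager role.
row = con.execute("""
    SELECT s.name,
           MAX(CASE WHEN s.manager_id = m.manager_id THEN m.name END) AS gm,
           MAX(CASE WHEN r.manager_id = m.manager_id THEN m.name END) AS rm
    FROM site_table s
    JOIN region_table r ON s.region_id = r.region_id
    JOIN manager_table m ON m.manager_id IN (s.manager_id, r.manager_id)
    GROUP BY s.name
""").fetchone()
```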
    
    qid & accept id: (36583115, 36584632) query: SQLite difference between latest and second latest row soup:

    soup wrap:

    You could always simulate ROW_NUMBER:

    WITH cte AS
    (
         SELECT *,
               (SELECT COUNT(*) + 1 
                FROM "events" e1
                WHERE e1.event_type = e.event_type
                  AND e1.time > e.time) AS rn
         FROM "events" e
    )
    SELECT c.event_type, c."value" - c2."value" AS "value"
    FROM cte c
    JOIN cte c2
      ON c.event_type = c2.event_type
     AND c.rn = 1 AND c2.rn = 2
    ORDER BY c.event_type, c.time;
    


    Output:

    ╔═══════════════╦═══════╗
    ║ event_type    ║ value ║
    ╠═══════════════╬═══════╣
    ║            2  ║    -5 ║
    ║            3  ║     4 ║
    ╚═══════════════╩═══════╝
    

    Identifiers like time/events/value are reserved words in some SQL dialects.
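
The correlated-COUNT trick can be checked end to end with Python's sqlite3 (sample events rows invented to reproduce the output above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE events (event_type INTEGER, time INTEGER, value INTEGER);
INSERT INTO events VALUES
  (2, 1, 10), (2, 2, 5),           -- latest 5, second-latest 10 -> -5
  (3, 1, 1), (3, 2, 3), (3, 3, 7); -- latest 7, second-latest 3 ->  4
""")

# rn counts how many later rows of the same type exist, plus one, so
# rn = 1 is the latest row and rn = 2 the second latest -- a hand-rolled
# ROW_NUMBER without window functions.
rows = con.execute("""
WITH cte AS (
    SELECT *,
          (SELECT COUNT(*) + 1
           FROM events e1
           WHERE e1.event_type = e.event_type
             AND e1.time > e.time) AS rn
    FROM events e
)
SELECT c.event_type, c.value - c2.value
FROM cte c
JOIN cte c2
  ON c.event_type = c2.event_type
 AND c.rn = 1 AND c2.rn = 2
ORDER BY c.event_type
""").fetchall()
```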

    qid & accept id: (36586811, 36586831) query: Select statement subquery, multiple conditions soup:

    soup wrap:

    I think you just need a where clause. For the filtering:

    select t.*
    from data_tbl t
    where (column2 = 'Condition_1') and
          (column3 = 'Condition_2' or column4 = 'Condition_3');
    

    I'm not sure what you want to return when both column3 and column4 meet the respective conditions, but I think this is what you want:

    select (case when column3 = 'Condition_2' then column3 else column4 end)
    from data_tbl t
    where (column2 = 'Condition_1') and
          (column3 = 'Condition_2' or column4 = 'Condition_3');
    
    qid & accept id: (36598033, 36598139) query: Mysql select * FROM table_one WHERE columns_one and columns_two in tables one and 2 have the same data soup:

    soup wrap:

    That sounds like a job for EXISTS(), which will check whether a record with the same (column1, column2) pair exists.

    SELECT * FROM Table1 t
    WHERE EXISTS(SELECT 1 FROM Table2 s
                 WHERE t.column1 = s.column1 and t.column2 = s.column2)
    

    It can also be done with an INNER JOIN:

    SELECT t.* FROM Table1 t
    INNER JOIN Table2 s
     ON(t.column1 = s.column1 and t.column2 = s.column2)
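
A quick check of the EXISTS form with Python's sqlite3 (hypothetical tables; only the rows whose pair also appears in Table2 survive):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Table1 (column1 INTEGER, column2 INTEGER, extra TEXT);
CREATE TABLE Table2 (column1 INTEGER, column2 INTEGER);
INSERT INTO Table1 VALUES (1, 1, 'keep'), (1, 2, 'drop'), (2, 2, 'keep');
INSERT INTO Table2 VALUES (1, 1), (2, 2);
""")

# Keep only Table1 rows whose (column1, column2) pair also exists in Table2.
kept = con.execute("""
    SELECT t.extra FROM Table1 t
    WHERE EXISTS (SELECT 1 FROM Table2 s
                  WHERE t.column1 = s.column1 AND t.column2 = s.column2)
    ORDER BY t.column1
""").fetchall()
```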
    
    qid & accept id: (36650818, 36651290) query: SQL delete records in order soup:

    soup wrap:

    The following will delete all rows that are not themselves parents. If the table is big and there's no index on ParentCommentID, it might take a while to run...

    DELETE co
     from Comment co
     where not exists (--  Correlated subquery
                       select 1
                        from Comment
                        where ParentCommentID = co.ID)
    

    If the table is truly large, a big delete can do bad things to your system, such as locking the table and bloating the transaction log file. The following will limit just how many rows will be deleted:

    DELETE top (1000) co  --  (1000 is not very many)
     from Comment co
     where not exists (--  Correlated subquery
                       select 1
                        from Comment
                        where ParentCommentID = co.ID)
    

    As deleting some but not all might not be so useful, here's a looping structure that will keep going until everything's gone:

    DECLARE @Done int = 1
    
    --BEGIN TRANSACTION
    
    WHILE @Done > 0
     BEGIN
        --  Loop until nothing left to delete
        DELETE top (1000) co
         from Comment co
         where not exists (--  Correlated subquery
                           select 1
                            from Comment
                            where ParentCommentID = co.ID)
        SET @Done = @@Rowcount
    
     END
    
    --ROLLBACK
    

    This last, of course, is dangerous (note the commented-out begin/rollback transaction used for testing!). You'll want WHERE clauses to limit what gets deleted, and something to ensure you don't somehow hit an infinite loop - all details that depend on your data and circumstances.
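
SQLite has no DELETE TOP (n), but the same batched loop can be sketched in Python with a LIMIT-ed subquery (a tiny invented comment tree, with a batch size of 2 standing in for 1000):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Comment (ID INTEGER, ParentCommentID INTEGER);
INSERT INTO Comment VALUES (1, NULL), (2, 1), (3, 2), (4, NULL);
""")

# Loop until a pass deletes nothing: leaves go first, then their parents
# become leaves on the next pass, and so on up the tree.
passes = 0
while True:
    cur = con.execute("""
        DELETE FROM Comment WHERE ID IN (
            SELECT co.ID FROM Comment co
            WHERE NOT EXISTS (SELECT 1 FROM Comment
                              WHERE ParentCommentID = co.ID)
            LIMIT 2)
    """)
    if cur.rowcount == 0:
        break
    passes += 1

left = con.execute("SELECT COUNT(*) FROM Comment").fetchone()[0]
```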

    qid & accept id: (36741708, 36744772) query: SQL calculate DayTime and NightTime between two DateTime values soup:

    soup wrap:

    You can use a recursive CTE for that count:

    DECLARE
    @DateTime1 datetime = '2016-04-20 13:30',
    @DateTime2 datetime = '2016-04-21 07:15'
    
    ;WITH times AS(
    SELECT  @DateTime1 as d,
            CASE WHEN DATEPART(hour,@DateTime1) between 6 and 22 then 'd' else 'n' end as a,
            0 as m
    UNION ALL
    SELECT  DATEADD(minute,1,d),
            CASE WHEN DATEPART(hour,DATEADD(minute,1,d)) between 6 and 22 then 'd' else 'n' end as a,
            DATEDIFF(minute,d,DATEADD(minute,1,d)) 
    FROM times
    WHERE DATEADD(minute,1,d) <= @DateTime2
    )
    
    SELECT  CASE WHEN a = 'd' THEN 'DayTime' ELSE 'NightTime' END as TimePart,
            sum(m)/60 as H,
            sum(m) - (sum(m)/60)* 60 as M
    FROM times
    GROUP BY a
    OPTION (MAXRECURSION 0)
    

    Output will be like:

    TimePart  H           M
    --------- ----------- -----------
    DayTime   10          45
    NightTime 7           0
    
    (2 row(s) affected)
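
The minute-walking approach ports to SQLite's WITH RECURSIVE almost unchanged; a Python sketch with the same two datetimes (starting the walk one minute in, so each row represents one elapsed minute, matching the sums above):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Walk the interval one minute at a time; hours 6..22 count as day.
rows = con.execute("""
WITH RECURSIVE times(d) AS (
    SELECT datetime('2016-04-20 13:30', '+1 minute')
    UNION ALL
    SELECT datetime(d, '+1 minute') FROM times
    WHERE d < datetime('2016-04-21 07:15')
)
SELECT CASE WHEN CAST(strftime('%H', d) AS INTEGER) BETWEEN 6 AND 22
            THEN 'DayTime' ELSE 'NightTime' END AS part,
       COUNT(*) / 60 AS h,
       COUNT(*) % 60 AS m
FROM times
GROUP BY part
ORDER BY part
""").fetchall()
```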
    
    qid & accept id: (36770316, 36770360) query: Select from cross-reference based on inclusion (column values being subset) soup:

    soup wrap:

    Hmmm . . . One way uses aggregation:

    select a_id
    from t
    group by a_id
    having sum(case when b_id not in (1, 2, 3, 4, 5) then 1 else 0 end) = 0;
    

    However, assuming you have an a table, I prefer this method:

    select a_id
    from a
    where not exists (select 1
                      from t
                      where t.a_id = a.a_id and t.b_id not in (1, 2, 3, 4, 5)
                     );
    

    This saves the expense of aggregation and the lookup can take advantage of an appropriate index (on t(a_id, b_id)) so this should have better performance.
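
A small sqlite3 check of the NOT EXISTS form (invented a/t rows; a_id 20 is linked to a b_id outside the list, so only 10 qualifies):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a (a_id INTEGER PRIMARY KEY);
CREATE TABLE t (a_id INTEGER, b_id INTEGER);
INSERT INTO a VALUES (10), (20);
INSERT INTO t VALUES (10, 1), (10, 3),   -- subset of {1..5}: qualifies
                     (20, 2), (20, 9);   -- 9 is outside the list: rejected
""")

# An a_id qualifies only if no row of t links it to a b_id outside 1..5.
ids = [r[0] for r in con.execute("""
    SELECT a_id FROM a
    WHERE NOT EXISTS (SELECT 1 FROM t
                      WHERE t.a_id = a.a_id
                        AND t.b_id NOT IN (1, 2, 3, 4, 5))
""")]
```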

    qid & accept id: (36834990, 36835186) query: Insert in SQL from a certain date soup:

    soup wrap:

    After the first insert you would have to use an UPDATE statement, like this:

    update schedule 
    set place='Room A'
    

    OR you can do this just as one insert:

    INSERT INTO schedule (date, place)
      VALUES
      ('2016-05-16 13:00:00','Room A'),
      ('2016-05-16 14:00:00','Room A'),
      ('2016-05-16 15:00:00','Room A'),
      ('2016-05-16 16:00:00','Room A'),
      ('2016-05-16 17:00:00','Room A'),
      ('2016-05-17 13:00:00','Room A'),
      ('2016-05-17 14:00:00','Room A'),
      ('2016-05-17 15:00:00','Room A'),
      ('2016-05-17 16:00:00','Room A'),
      ('2016-05-17 17:00:00','Room A'),
      ('2016-05-18 13:00:00','Room A'),
      ('2016-05-18 14:00:00','Room A'),
      ('2016-05-18 15:00:00','Room A'),
      ('2016-05-18 16:00:00','Room A'),
      ('2016-05-18 17:00:00','Room A'),
      ('2016-05-19 13:00:00','Room A'),
      ('2016-05-19 14:00:00','Room A'),
      ('2016-05-19 15:00:00','Room A'),
      ('2016-05-19 16:00:00','Room A'),
      ('2016-05-19 17:00:00','Room A');
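    Rather than typing out every tuple, the same multi-row insert can be generated programmatically. A sketch using SQLite, mirroring the answer's hypothetical schedule table: one row per hour 13:00-17:00 for each day 16-19 May 2016.

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE schedule (date TEXT, place TEXT)")

start = datetime(2016, 5, 16)
rows = [((start + timedelta(days=d, hours=13 + h)).strftime("%Y-%m-%d %H:%M:%S"),
         "Room A")
        for d in range(4)    # 4 days: May 16-19
        for h in range(5)]   # 5 one-hour slots starting at 13:00
conn.executemany("INSERT INTO schedule (date, place) VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM schedule").fetchone()[0]
print(count)  # 20, matching the hand-written INSERT
```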
    
    qid & accept id: (36909436, 36910225) query: yii2 how to create and condition in another one soup:


    andFilterWhere and orFilterWhere don't allow nesting this way since they operate on the query object - not on a condition object. You could define your query this way:

    $query->where(
        ['and',
            [
                'between',
                'str_to_date(\'' . $this->dateRecherche . '\', \'%d/%m/%Y %H:%i\')',
                new Expression('str_to_date(erDateDebut, \'%d/%m/%Y %H:%i\')'),
                new Expression('str_to_date(erDateFin, \'%d/%m/%Y %H:%i\')'),
            ],
            [
                'not', ['erDateFin' => null],
            ]
        ]);
    
    $query->orWhere(
        ['and',
            [
                'between',
                'str_to_date(\'' . $this->dateRecherche . '\', \'%d/%m/%Y %H:%i\')',
                new Expression('str_to_date(erDateDebut, \'%d/%m/%Y %H:%i\')'),
                new Expression('now()'),
            ],
            [
                'is', 'erDateFin', null,
            ]
        ]);
    

    You can even put it all into one where method call:

    $query->where(
        ['or',
            ['and',
                [
                    'between',
                    'str_to_date(\'' . $this->dateRecherche . '\', \'%d/%m/%Y %H:%i\')',
                    new Expression('str_to_date(erDateDebut, \'%d/%m/%Y %H:%i\')'),
                    new Expression('str_to_date(erDateFin, \'%d/%m/%Y %H:%i\')'),
                ],
                [
                    'not', ['erDateFin' => null],
                ]
            ],
            ['and',
                [
                    'between',
                    'str_to_date(\'' . $this->dateRecherche . '\', \'%d/%m/%Y %H:%i\')',
                    new Expression('str_to_date(erDateDebut, \'%d/%m/%Y %H:%i\')'),
                    new Expression('now()'),
                ],
                [
                    'is', 'erDateFin', null,
                ]
            ]
        ]);
    

    I have used the where method, not the filterWhere methods, because you probably don't want to remove empty operands from the query. More information about filtering can be found in the Yii 2 documentation; the where documentation also covers the and and or operators.

    As you probably already know, count can be done with

    $count = $query->count();
    
    qid & accept id: (36921158, 36944570) query: JOIN multiple rows to multiple columns in single row Netezza/Postgres soup:


    I found one way of getting the desired results: a row_number() sub-select limited to the desired window size, which gives each entry per date something like this

    Date         Name      Value    Row_Num
    ---------------------------------------
    2015-01-01    A         12        0
    2015-01-01    A         12        1
    2015-01-01    A         12        2
    2015-01-01    A         12        3
    

    In the next step one can use

    (Date + Row_Num*INTERVAL'1 DAY')::DATE 
    

    which then can be joined on the initial table and pivoted. This will allow for any arbitrary combination of Names per date.
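    The Date + Row_Num * INTERVAL '1 DAY' idea can be sketched in SQLite (which needs version 3.25+ for window functions, and DATE(d, '+N day') in place of interval arithmetic). The table and sample data are invented to match the answer's example output.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entries (d TEXT, name TEXT, value INTEGER)")
# Four identical rows for the same date, as in the answer's illustration
conn.executemany("INSERT INTO entries VALUES (?, ?, ?)",
                 [("2015-01-01", "A", 12)] * 4)
rows = conn.execute("""
    SELECT d, name, value,
           ROW_NUMBER() OVER (PARTITION BY d, name ORDER BY d) - 1 AS rn,
           DATE(d, '+' || (ROW_NUMBER() OVER (PARTITION BY d, name ORDER BY d) - 1)
                       || ' day') AS shifted
    FROM entries
    ORDER BY shifted
""").fetchall()
for r in rows:
    print(r)  # each duplicate lands on its own day: 2015-01-01 .. 2015-01-04
```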

    qid & accept id: (36936128, 36936188) query: Convert date to nvarchar and merge two columns soup:


    Although you can fiddle around with conversion codes, just use replace:

    REPLACE(CONVERT(NVARCHAR(255), Column_1) + CONVERT(NVARCHAR(255), Column_2), '-', '') AS TEST
    

    Or, if you don't want to be dependent on the local date format:

    CONVERT(NVARCHAR(255), Column_1, 112) + CONVERT(NVARCHAR(255), Column_2, 112) AS TEST
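    The point of style 112 is that it formats a date as yyyymmdd with no separators, so there is nothing left to REPLACE. For comparison, the equivalent formatting in Python (the two column values are made up):

```python
from datetime import date

column_1 = date(2016, 5, 9)
column_2 = date(2016, 5, 10)

# Same shape as T-SQL CONVERT(..., 112): yyyymmdd, no dashes
test = column_1.strftime("%Y%m%d") + column_2.strftime("%Y%m%d")
print(test)  # 2016050920160510
```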
    
    qid & accept id: (36988239, 36988356) query: T-SQL- Concatenate variable # of rows soup:


    This looks like a presentation matter only, and it should ideally be done in the application layer.


    You could use DENSE_RANK() and integer division to create groups:
    WITH cte AS
    (
       SELECT DISTINCT ID, (DENSE_RANK() OVER(ORDER BY ID) - 1)/3 AS grp 
       FROM #Temp
    )
    SELECT DISTINCT STUFF((SELECT ',' + ID
                           FROM cte T1
                           WHERE T1.grp = T2.grp
                           ORDER BY ID
                           FOR XML PATH('')
                           ), 1, 1, '') ID
    FROM cte T2;
    


    Output:

    ╔═════════════╗
    ║     ID      ║
    ╠═════════════╣
    ║ 123,234,345 ║
    ║ 456,567,678 ║
    ║ 789,890,901 ║
    ╚═════════════╝
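    The same grouping idea can be sketched in SQLite (3.25+ for window functions), with GROUP_CONCAT standing in for T-SQL's STUFF ... FOR XML PATH trick. The nine sample IDs are taken from the output above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE temp_ids (id TEXT)")
conn.executemany("INSERT INTO temp_ids VALUES (?)",
                 [(i,) for i in ["123", "234", "345", "456", "567",
                                 "678", "789", "890", "901"]])
rows = conn.execute("""
    WITH cte AS (
        SELECT DISTINCT id,
               (DENSE_RANK() OVER (ORDER BY id) - 1) / 3 AS grp
        FROM temp_ids
    )
    SELECT GROUP_CONCAT(id, ',') AS ids   -- concatenate each group of three
    FROM cte
    GROUP BY grp
    ORDER BY grp
""").fetchall()
for r in rows:
    print(r)
```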
    
    qid & accept id: (36989059, 36989525) query: oracle database: select between certain time of the day soup:
    select * from workpaths where to_char(wp_stime,'hh24') between 9 and 16; 
    

    Should help. Oracle will extract the hour part from your date field as a string and, on seeing that you are comparing it with numbers, will implicitly convert it to a number. Thus you can compare between hours. Effectively, this query gives dates whose time is at or after 9 am and before 5 pm.

    EDIT :

    17 is replaced with 16, so that values till 16:59:59 would be considered.

    EDIT 2 :

    To explicitly perform string to numeric casting :

    select * from workpaths where to_number(to_char(wp_stime,'hh24')) between 9 and 16; 
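    The "hour between 9 and 16" filter translates directly to other databases. A sketch in SQLite, with an explicit CAST in place of Oracle's implicit conversion and invented workpaths rows around the boundaries:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE workpaths (wp_stime TEXT)")
conn.executemany("INSERT INTO workpaths VALUES (?)",
                 [("2016-05-09 08:59:00",),   # too early
                  ("2016-05-09 09:00:00",),   # boundary, kept
                  ("2016-05-09 16:59:59",),   # last qualifying second, kept
                  ("2016-05-09 17:00:00",)])  # too late
rows = conn.execute("""
    SELECT wp_stime FROM workpaths
    WHERE CAST(strftime('%H', wp_stime) AS INTEGER) BETWEEN 9 AND 16
""").fetchall()
print(rows)  # keeps the 09:00:00 and 16:59:59 rows only
```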
    
    qid & accept id: (37009235, 37009274) query: SQL Query query matching columns and counting rows soup:


    You are looking for this

    SELECT
    CASE WHEN P1 IS NOT NULL THEN P1 ELSE P2 END AS P,
    COUNT(SomeData) AS Counts
    FROM MyTable
    GROUP BY CASE WHEN P1 IS NOT NULL THEN P1 ELSE P2 END
    

    or Case can be simplified by Coalesce/Isnull function

    SELECT
    COALESCE(P1,P2) AS P,
    COUNT(SomeData) AS Counts
    FROM MyTable
    GROUP BY COALESCE(P1,P2)
    
    1. Just use the entire CASE statement instead of the alias
    2. No need to use a CASE statement to count the not-NULL values; the COUNT(colname) aggregate counts only not-NULL values
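    Both points can be checked in SQLite with a few invented rows, including NULLs on each side and one NULL in the counted column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (p1 TEXT, p2 TEXT, somedata TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?, ?)",
                 [("a", None, "x"),
                  (None, "a", "y"),
                  (None, "b", None)])   # NULL somedata is NOT counted
rows = conn.execute("""
    SELECT COALESCE(p1, p2) AS p, COUNT(somedata) AS counts
    FROM mytable
    GROUP BY COALESCE(p1, p2)
    ORDER BY p
""").fetchall()
print(rows)  # [('a', 2), ('b', 0)]
```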
    qid & accept id: (37064962, 37065041) query: Get the word before particular word in sql soup:


    Try this

    DECLARE @a varchar(500), @v varchar(500)
    SET @a='MALTON ROAD WICKLOW EIRE'
    SELECT @v = LTRIM(RTRIM(SUBSTRING(@a,1,charindex('EIRE',@a)-1)))
    SELECT REVERSE( LEFT( REVERSE(@v), 
                ISNULL(NULLIF(CHARINDEX(' ', REVERSE(@v)),0)-1,LEN(@v)) ) )
    

    Result:

    DATA                        RESULT
    -----------------------------------
    MALTON EIRE                 MALTON
    MALTON ROAD WICKLOW EIRE    WICKLOW
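    The REVERSE/CHARINDEX gymnastics above just extract the last word before 'EIRE'. The same logic written plainly in Python, for clarity (word_before is a made-up helper name):

```python
def word_before(text: str, marker: str = "EIRE") -> str:
    head = text[:text.index(marker)].strip()  # everything before the marker
    return head.split(" ")[-1]                # last remaining word

print(word_before("MALTON EIRE"))               # MALTON
print(word_before("MALTON ROAD WICKLOW EIRE"))  # WICKLOW
```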
    
    qid & accept id: (37084523, 37084548) query: Grouping of pairs in sql soup:


    Group by a case statement that selects the pairs in alphabetical order:

    select case when col1 < col2 then col1 else col2 end as col1,
    case when col1 < col2 then col2 else col1 end as col2
    from (
        select 'a' as col1, 'b' as col2
        union all
        select 'b', 'a'
        union all
        select 'c', 'd'
        union all
        select 'a', 'c'
        union all
        select 'a', 'd'
        union all
        select 'b', 'c'
        union all
        select 'd', 'a'
    ) t group by case when col1 < col2 then col1 else col2 end,
    case when col1 < col2 then col2 else col1 end
    

    http://sqlfiddle.com/#!3/9eecb7db59d16c80417c72d1/6977

    If you simply want unique values (as opposed to a grouping for aggregation) then you can use distinct instead of group by

    select distinct case when col1 < col2 then col1 else col2 end as col1,
    case when col1 < col2 then col2 else col1 end as col2
    from (
        select 'a' as col1, 'b' as col2
        union all
        select 'b', 'a'
        union all
        select 'c', 'd'
        union all
        select 'a', 'c'
        union all
        select 'a', 'd'
        union all
        select 'b', 'c'
        union all
        select 'd', 'a'
    ) t
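    The canonical-ordering trick ports to SQLite, where the scalar two-argument MIN/MAX functions can stand in for the CASE expressions. A sketch with the same sample pairs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pairs (col1 TEXT, col2 TEXT)")
conn.executemany("INSERT INTO pairs VALUES (?, ?)",
                 [("a", "b"), ("b", "a"), ("c", "d"), ("a", "c"),
                  ("a", "d"), ("b", "c"), ("d", "a")])
rows = conn.execute("""
    SELECT DISTINCT MIN(col1, col2) AS lo,   -- alphabetically first member
                    MAX(col1, col2) AS hi    -- alphabetically second member
    FROM pairs
    ORDER BY lo, hi
""").fetchall()
print(rows)  # (b,a) and (d,a) collapse into (a,b) and (a,d)
```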
    
    qid & accept id: (37088208, 37088438) query: how to add data two colums add after add two add next soup:


    I'm not 100% sure what you're asking, but it sounds like you want a cumulative sum. That's a question that's been answered already:

    https://stackoverflow.com/a/2120639/2565840

    EDIT: in your case I think the query below should work

    WITH    
    rows AS (
            SELECT  *, ROW_NUMBER() OVER (ORDER BY gps_time) AS rn
            FROM    rawtTackHistory_A2Z where car_id = 12956 
    ),
    differences AS (
        SELECT  mc.rn, mc.gps_time,DATEDIFF(second, mc.gps_time, mp.gps_time) time_diff
        FROM    rows mc
        JOIN    rows mp
        ON      mc.rn = mp.rn - 1
    )
    SELECT t1.gps_time, t1.time_diff, SUM(t2.time_diff) time_sum
    FROM differences t1
    INNER JOIN differences t2 
    ON t1.rn >= t2.rn
    GROUP BY t1.rn, t1.gps_time, t1.time_diff
    ORDER BY t1.rn
    

    or if you're using SQL Server 2012 or later, this should run quicker:

    SELECT gps_time
         , DATEDIFF(second, LAG(gps_time) OVER (ORDER BY gps_time), gps_time) time_diff
         , DATEDIFF(second, MIN(gps_time) OVER (ORDER BY gps_time), gps_time) time_sum
    FROM rawtTackHistory_A2Z 
    ORDER BY gps_time
    

    It's using a windowing clause (OVER). More detail here: https://msdn.microsoft.com/en-us/library/ms189461.aspx
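    The window-function variant can be demonstrated in SQLite (3.25+): LAG gives the gap to the previous row, and MIN ... OVER gives the elapsed time since the first row. The timestamps are invented, and strftime('%s') converts them to epoch seconds in place of DATEDIFF:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE track (gps_time TEXT)")
conn.executemany("INSERT INTO track VALUES (?)",
                 [("2016-05-09 10:00:00",),
                  ("2016-05-09 10:00:10",),
                  ("2016-05-09 10:00:25",)])
rows = conn.execute("""
    SELECT gps_time,
           CAST(strftime('%s', gps_time) AS INTEGER)
             - CAST(strftime('%s', LAG(gps_time) OVER (ORDER BY gps_time))
                    AS INTEGER) AS time_diff,
           CAST(strftime('%s', gps_time) AS INTEGER)
             - CAST(strftime('%s', MIN(gps_time) OVER (ORDER BY gps_time))
                    AS INTEGER) AS time_sum
    FROM track
    ORDER BY gps_time
""").fetchall()
for r in rows:
    print(r)  # first row's LAG is NULL; then gaps 10 and 15, running totals 0/10/25
```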

    qid & accept id: (37109635, 37110021) query: MIN on a date field soup:


    This is a simple question but there's a big catch to it.

    Is it possible to apply MIN to date column ?

    • Yes you can.

    Is this the correct way MIN(TO_DATE(Person.birthday, 'DD-Mon-YY')) ?

    • No. You should use MIN(Person.birthday). If the column is already of DATE type, you should not use TO_DATE to convert it again, as Oracle converts it implicitly.

    Here's an example why -

    DATA -

    +-------+--------+-----------+------+-------------+------+------+--------+
    | EMPNO | ENAME  |    JOB    | MGR  |  HIREDATE   | SAL  | COMM | DEPTNO |
    +-------+--------+-----------+------+-------------+------+------+--------+
    |  7369 | SMITH  | CLERK     | 7902 | 17/Dec/1980 |  800 |      |     20 |
    |  7499 | ALLEN  | SALESMAN  | 7698 | 20/Feb/1981 | 1600 |  300 |     30 |
    |  7521 | WARD   | SALESMAN  | 7698 | 22/Feb/1981 | 1250 |  500 |     30 |
    +-------+--------+-----------+------+-------------+------+------+--------+
    

    QUERY 1

    select MIN(TO_DATE(hiredate, 'DD-Mon-YY')) from emp;
    

    RESULT 1

    17/12/2080
    

    QUERY 2

    select MIN(hiredate) from emp;
    

    RESULT 2

    17/12/1980
    

    As you can see, the century is messed up when you use the TO_DATE function in QUERY 1. However, the result is as expected in QUERY 2.

    If you really have to use the TO_DATE function, I would suggest using the DD-Mon-RR format, as it takes care of the century mismatch. This format was introduced when the year-2000 problem (the Millennium bug) came up. However, I still wouldn't advise going for it.

    EDIT 1:

    I am not sure how Person.birthday is a valid column name. Can anyone enlighten me on this ?
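    The century behaviour that turned 1980 into 2080 can be sketched outside Oracle. These two functions are my own illustration of the rules, not Oracle code: YY bolts the two digits onto the current century, while RR picks the century that keeps the year within roughly 50 years of now.

```python
def year_yy(two_digit: int, current_year: int = 2016) -> int:
    # YY: always assume the current century
    return (current_year // 100) * 100 + two_digit

def year_rr(two_digit: int, current_year: int = 2016) -> int:
    # RR: Oracle's windowing rule for choosing the century
    century = (current_year // 100) * 100
    current_2d = current_year % 100
    if two_digit < 50 and current_2d >= 50:
        century += 100   # e.g. '05' seen in 1999 means 2005
    elif two_digit >= 50 and current_2d < 50:
        century -= 100   # e.g. '80' seen in 2016 means 1980
    return century + two_digit

print(year_yy(80))  # 2080 -- the "messed up" century from QUERY 1
print(year_rr(80))  # 1980 -- what DD-Mon-RR would give
```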

    qid & accept id: (37111927, 37112045) query: Oracle sql execute sql from varchar field soup:


    You can't dynamically run SQL within SQL.

    Alternatively, you can encapsulate the query logic in a function and use dynamic SQL in PL/SQL.

    For this you would need to create a function (my_function in the SQL below) that returns a collection of strings and accepts a SQL statement as a parameter, and write your query this way

     SELECT *
       FROM table_B, table_A 
       WHERE table_B.id = table_A.id
         AND table_B.value IN (select column_value from Table(MY_FUNCTION(Table_A.SQL_Statement)))
    

    Performance is not to be ignored with this approach. I suggest you evaluate the cost of context switching between SQL and PL/SQL before going with this solution.

    Additionally, you'll have to analyze if SQL Injection is a possibility and make sure that no malicious SQL is passed as a parameter to the function

    Sample Code

    CREATE TYPE varchar_tab_t AS TABLE OF VARCHAR2(30);
    /
    
    
    CREATE OR REPLACE function MY_FUNCTION (sqlstring in varchar2) return varchar_tab_t IS
     v_values_tab varchar_tab_t;
    BEGIN
    
      EXECUTE IMMEDIATE sqlstring bulk collect into v_values_tab;
      return v_values_tab;  
    END MY_FUNCTION;
    /
    
    
    with table_a (id, SQL_STATEMENT) as 
      (select 1, 'Select 1 from dual union select 2 from dual union select 3 from dual' from dual)
    , table_b (id, value) as 
      (            select 1, 1 from dual 
        union  all select 1, 2 from dual 
        union  all select 1, 5 from dual -- this one should not be shown
       )  
     SELECT *
       FROM table_B, table_A 
       WHERE table_B.id = table_A.id
         AND table_B.value IN (select column_value from Table(MY_FUNCTION(Table_A.SQL_Statement)))
    

    Result

    1   1   1   Select 1 from dual union select 2 from dual union select 3 from dual
    1   2   1   Select 1 from dual union select 2 from dual union select 3 from dual
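    Outside the database the same pattern looks like this: read the statement stored in table_a, run it, and use the resulting values to filter table_b. A SQLite sketch with invented data mirroring the sample above; the same SQL-injection caveat applies, since the stored statement is executed verbatim.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table_a (id INTEGER, sql_statement TEXT);
    CREATE TABLE table_b (id INTEGER, value INTEGER);
    INSERT INTO table_a VALUES (1, 'SELECT 1 UNION SELECT 2 UNION SELECT 3');
    INSERT INTO table_b VALUES (1, 1), (1, 2), (1, 5);
""")
result = []
for a_id, stmt in conn.execute("SELECT id, sql_statement FROM table_a").fetchall():
    allowed = {row[0] for row in conn.execute(stmt)}   # run the stored SQL
    result += [(b_id, value) for b_id, value in
               conn.execute("SELECT id, value FROM table_b WHERE id = ?", (a_id,))
               if value in allowed]
print(result)  # (1, 5) is filtered out, as in the answer's Result
```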
    
    qid & accept id: (37148559, 37150100) query: Calculating overlap in MySQL soup:


    You can do that by generating rows that represent links: src -> dst = count

    1) Get matrix

    select c1.class src_class, c2.class dst_class
    from (select distinct class from classes) c1
    join (select distinct class from classes) c2
    order by src_class, dst_class
    

    The "select distinct class" is not strictly necessary to generate the matrix; you could select the classes directly and GROUP BY. But at step 2 we need those unique results.

    Result :

    src_class      dst_class
    -----------------------------
    algebra        algebra
    algebra        gym
    algebra        world_history
    gym            algebra
    gym            gym
    gym            world_history
    world_history  algebra
    world_history  gym
    world_history  world_history
    

    2) Join list of students that match the source and destination

    select c1.class src_class, c2.class dst_class, count(v.student_id) overlap
    from (select distinct class from classes) c1
    join (select distinct class from classes) c2
    left join classes v on
    (
        v.class = c1.class
        and v.student_id in (select student_id from classes
                             where class = c2.class)
    )
    group by src_class, dst_class
    order by src_class, dst_class
    

    The distinct values (step 1) allow us to get all class pairs, even if there are no links (showing 0 instead).

    Result :

    src_class      dst_class      overlap
    -------------------------------------
    algebra        algebra           7
    algebra        gym               2
    algebra        world_history     1
    gym            algebra           2
    gym            gym               5
    gym            world_history     2
    world_history  algebra           1
    world_history  gym               2
    world_history  world_history     6
    

    3) Make a different calculation if the classes are equal

    select c1.class src_class, c2.class dst_class, count(v.student_id) overlap
    from (select distinct class from classes) c1
    join (select distinct class from classes) c2
    left join classes v on
    (
        v.class = c1.class and
        (
            -- When classes are equals
            -- Students presents only in that class
            (c1.class = c2.class
             and 1 = (select count(*) from classes
                      where student_id = v.student_id))
        or
            -- When classes are differents
            -- Students present in both classes
            (c1.class != c2.class
             and v.student_id in (select student_id from classes
                                  where class = c2.class))
        )
    )
    group by src_class, dst_class
    order by src_class, dst_class
    

    Result :

    src_class      dst_class      overlap
    -------------------------------------
    algebra        algebra           5
    algebra        gym               2
    algebra        world_history     1
    gym            algebra           2
    gym            gym               2
    gym            world_history     2
    world_history  algebra           1
    world_history  gym               2
    world_history  world_history     4
    
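    For illustration, the same self-join pattern can be exercised end-to-end in SQLite via Python's sqlite3 module. The tiny dataset below is a hypothetical stand-in for the question's data, so the counts differ from the results above:

```python
import sqlite3

# Hypothetical enrolments: algebra {1,2,3}, gym {2,3,4}, world_history {4}.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE classes (student_id INTEGER, class TEXT);
INSERT INTO classes VALUES
  (1,'algebra'),(2,'algebra'),(3,'algebra'),
  (2,'gym'),(3,'gym'),(4,'gym'),
  (4,'world_history');
""")

# Same shape as the answer's query: cross-join the distinct classes,
# then left-join the enrolments shared by both classes and count them.
rows = con.execute("""
    SELECT c1.class AS src_class, c2.class AS dst_class,
           COUNT(v.student_id) AS overlap
    FROM (SELECT DISTINCT class FROM classes) c1
    JOIN (SELECT DISTINCT class FROM classes) c2
    LEFT JOIN classes v ON v.class = c1.class
       AND v.student_id IN (SELECT student_id FROM classes
                            WHERE class = c2.class)
    GROUP BY src_class, dst_class
    ORDER BY src_class, dst_class
""").fetchall()
overlap = {(s, d): n for s, d, n in rows}
```

    The LEFT JOIN keeps class pairs with no shared students, so they show up with an overlap of 0 rather than disappearing.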
    qid & accept id: (37173392, 37175703) query: PostgreSQL function check if field is CSV soup:

    soup wrap:

    PostgreSQL is an excellent choice and very versatile for things like this.

    First off, to determine whether your sample_id is a single value or a list of values:

    -- (sample_id ~ '^ *\d+ *$') returns true if there is one number only
    SELECT CASE WHEN sample_id ~ '^ *\d+ *$' THEN sample_id::int END
    

    Then, to open up the list of ids in a comma-separated list of samples you can unnest the array returned by string_to_array:

    SELECT i
    FROM unnest(string_to_array(sample_id, ',')::int[]) i
    

    You can use that for either single or multiple numbers (since there is just one value, you'll get only one row).
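    The same branching logic is easy to sanity-check outside the database; here is a small Python sketch (the function name is invented for illustration):

```python
import re

def parse_sample_id(sample_id: str):
    """Mirror of the SQL logic: a lone number casts directly,
    otherwise split the comma-separated list into ints."""
    # Single value, like sample_id ~ '^ *\d+ *$'
    if re.fullmatch(r" *\d+ *", sample_id):
        return [int(sample_id)]
    # Otherwise behave like unnest(string_to_array(sample_id, ',')::int[])
    return [int(part) for part in sample_id.split(",")]
```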

    qid & accept id: (37181919, 37182136) query: Oracle SQL replace Character soup:

    soup wrap:

    If you have a fixed character to replace ('s' in your example) you can use this:

    with test(string) as ( select 'Root#root#abc#test#stest#s#beta#402' from dual)
    select regexp_replace(string, '(.*)#(.*)#(.*)#(.*)#(.*)#s', '\1#\2#\3#\4#\5#S')
    from test
    

    This cuts the string into 5 blocks, each with a trailing '#', and then replaces the 's' after the 5th block with 'S'.

    You can even use regexp to count the occurrences for you:

    select regexp_replace(string, '(([^#]*#){5,5})s', '\1S')
    from test
    

    This counts exactly 5 occurrences of the block, without the need to write it 5 times.

    With a different approach, without regexp, you can try:

    select substr(string, 1, instr(string, '#', 1, 5) ) ||
           upper(substr(string, instr(string, '#', 1, 5)+1, 1)) ||
           substr(string, instr(string, '#', 1, 5) + 2)
    from test
    

    This simply cuts the string into 3 parts (from the beginning to the 5th '#', the following character, and the remainder) and uppercases that character. This can handle different characters, with no need to hardcode 's'.
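    Both the counted-occurrence regex and the position-based variant can be checked quickly in Python, whose re.sub happens to behave like Oracle's regexp_replace for this pattern:

```python
import re

s = "Root#root#abc#test#stest#s#beta#402"

# Counted-occurrence version: five '#'-terminated blocks, then the 's'.
out = re.sub(r"((?:[^#]*#){5})s", r"\1S", s)

# Position-based version, like the SUBSTR/INSTR variant: find the 5th '#'
# and uppercase whatever single character follows it.
pos = [i for i, ch in enumerate(s) if ch == "#"][4]
out2 = s[: pos + 1] + s[pos + 1].upper() + s[pos + 2 :]
```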

    qid & accept id: (37247507, 37247576) query: Data Base Query - I have a two table A and B. I have to fetch the data only from soup:

    soup wrap:

    Try this; it may work for you:

    select *
    from tablea a
    where not exists(select 1 from tableb b where a.value = b.value)
    

    SQL Fiddle

    MySQL 5.6 Schema:

    CREATE TABLE IF NOT EXISTS `tablea` (
      `value` int(11) DEFAULT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    
    INSERT INTO `tablea` (`value`) VALUES
        (1),
        (2),
        (3),
        (4),
        (5),
        (6);
    
    CREATE TABLE IF NOT EXISTS `tableb` (
      `value` int(11) DEFAULT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 ROW_FORMAT=COMPACT;
    
    INSERT INTO `tableb` (`value`) VALUES
        (1),
        (2),
        (7),
        (8),
        (9),
        (0);
    

    Query 1:

    select *
    from tablea a
    where not exists(select 1 from tableb b where a.value = b.value)
    

    Results:

    | value |
    |-------|
    |     3 |
    |     4 |
    |     5 |
    |     6 |
    
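    Stripped of the MySQL-specific table options, the same schema and NOT EXISTS query run unchanged in SQLite, for example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tablea (value INT);
CREATE TABLE tableb (value INT);
INSERT INTO tablea VALUES (1),(2),(3),(4),(5),(6);
INSERT INTO tableb VALUES (1),(2),(7),(8),(9),(0);
""")

# Rows of tablea with no matching value in tableb.
rows = [v for (v,) in con.execute("""
    SELECT value FROM tablea a
    WHERE NOT EXISTS (SELECT 1 FROM tableb b WHERE a.value = b.value)
    ORDER BY value
""")]
```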
    qid & accept id: (37270805, 37271022) query: Oracle substring table based on hierarchy soup:

    soup wrap:

    Query - Use a recursive sub-query factoring clause:

    WITH table_name ( list ) AS (
      SELECT '1,2,3,4,5,6' FROM DUAL
    ),
    rsqfc ( list ) AS (
      SELECT list FROM table_name
      UNION ALL
      SELECT SUBSTR( list, 1, INSTR( list, ',', -1 ) - 1 )
      FROM   rsqfc
      WHERE  INSTR( list, ',', -1 ) > 0
    )
    SELECT * FROM rsqfc;
    

    Query - Hierarchical Query:

    WITH table_name ( list ) AS (
      SELECT '1,2,3,4,5,6' FROM DUAL
    )
    SELECT CASE LEVEL
                WHEN 1 THEN list
                ELSE SUBSTR( list, 1, INSTR( list, ',', -1, LEVEL - 1 ) - 1 )
                END AS list
    FROM   table_name
    CONNECT BY INSTR( list, ',', -1, LEVEL - 1 ) > 0;
    

    Output:

    (Both output the same)

    list
    ------------
    1,2,3,4,5,6
    1,2,3,4,5
    1,2,3,4
    1,2,3
    1,2
    1
    
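    The shrinking-list output of both the recursive and the CONNECT BY variants is easy to sanity-check outside Oracle; a small Python sketch of the same idea (repeatedly dropping the segment after the last comma):

```python
def prefixes_by_comma(value: str):
    """Repeatedly drop the part after the last comma, like the
    recursive sub-query / CONNECT BY variants above."""
    out = [value]
    while "," in value:
        value = value.rsplit(",", 1)[0]   # everything before the last comma
        out.append(value)
    return out
```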
    qid & accept id: (37303429, 37303555) query: Postgres Order by multi columns with a condition soup:

    soup wrap:

    Try:

    ORDER BY COALESCE(clearedTime, alarmTime)
    

    Or something like this:

    ORDER BY CASE 
                 WHEN clearedTime IS NULL THEN NULL
                                          ELSE 1 
             END NULLS FIRST,
             alarm.alarmTime DESC,
             alarm.clearedTime DESC -- optional to solve tie between 415068 and 415073       
    
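    The COALESCE sort key is easy to demonstrate in SQLite (the alarm rows below are invented): cleared alarms sort by clearedTime, active ones fall back to alarmTime, all in a single key.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE alarm (id INT, alarmTime INT, clearedTime INT)")
con.executemany("INSERT INTO alarm VALUES (?,?,?)", [
    (1, 100, None),   # still active: falls back to alarmTime
    (2, 50, 300),
    (3, 200, 250),
])

# Effective keys: id 1 -> 100, id 2 -> 300, id 3 -> 250.
rows = [r[0] for r in con.execute(
    "SELECT id FROM alarm ORDER BY COALESCE(clearedTime, alarmTime)"
)]
```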
    qid & accept id: (37316901, 37316985) query: How do I append a row with specific values to the output of my T-SQL query? soup:

    soup wrap:

    Use a union all:

    select id, name from xxx -- is your query
    union all
    select 999 as id, 'XY Ltd' as name 
    

    EDIT: In addition to @ThorstenKettner's comment: if ID is not numeric, or for any reason you cannot use it for the sort, you could do it like this:

    select ID, Name
    from 
    (
        select 1 as SpecialSort, ID, Name from xxx -- is your query
        union all
        select 2 as SpecialSort, 999 as ID, 'XY Ltd' as Name
    ) AllData
    order by SpecialSort asc -- like this your manually added row will appear at the end
    
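    A runnable sketch of the SpecialSort idea in SQLite (the table name and rows are invented, and an ID tiebreaker is added to make the order fully deterministic):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE xxx (id INT, name TEXT);
INSERT INTO xxx VALUES (1, 'Alpha'), (2, 'Beta');
""")

# SpecialSort guarantees the manually added row sorts last,
# regardless of what the IDs look like.
rows = con.execute("""
    SELECT ID, Name FROM (
        SELECT 1 AS SpecialSort, id AS ID, name AS Name FROM xxx
        UNION ALL
        SELECT 2 AS SpecialSort, 999 AS ID, 'XY Ltd' AS Name
    ) AllData
    ORDER BY SpecialSort ASC, ID ASC
""").fetchall()
```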
    qid & accept id: (37405073, 37405128) query: SQL LIKE from multiple table soup:

    soup wrap:

    Remove the $ from '%$Keurig%'. Try this:

    SELECT I.itemName,P.firstName,P.lastName
      FROM `ITEM` I, `PROFILE` P
      WHERE I.pID=P.pID AND I.itemName LIKE '%Keurig%'
    

    With $searchtag, try something like this:

    $sql = "SELECT I.itemName,P.firstName,P.lastName
      FROM `ITEM` I, `PROFILE` P
      WHERE I.pID=P.pID AND I.itemName LIKE '%".$searchtag."%'";
    
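    Note that splicing $searchtag directly into the SQL string invites SQL injection; with any parameterized API the term can be bound instead. A hedged SQLite sketch of the same join and LIKE (table contents invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE ITEM (itemName TEXT, pID INT);
CREATE TABLE PROFILE (pID INT, firstName TEXT, lastName TEXT);
INSERT INTO ITEM VALUES ('Keurig Brewer', 1), ('Kettle', 2);
INSERT INTO PROFILE VALUES (1, 'Ann', 'Lee'), (2, 'Bob', 'Ray');
""")

searchtag = "Keurig"
# Bind the search term instead of splicing it into the SQL text.
rows = con.execute("""
    SELECT I.itemName, P.firstName, P.lastName
    FROM ITEM I JOIN PROFILE P ON I.pID = P.pID
    WHERE I.itemName LIKE '%' || ? || '%'
""", (searchtag,)).fetchall()
```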
    qid & accept id: (37449748, 37449951) query: SQL Joining and Update soup:

    soup wrap:

    Here is how you add a column to a table. I'm assuming projectNbr is an integer:

    ALTER TABLE dbo.WTS_EXT_Project ADD projectNbr INT
    

    Then to fill your new column from the Importthis table, it would look something like this:

    UPDATE wep
    SET projectNbr = it.projectNbr
    FROM dbo.WTS_EXT_Project wep
    INNER JOIN dbo.Importthis it
      ON wep.ProjectManager = it.ProjectManager
    
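    SQLite 3.33+ accepts a very similar UPDATE ... FROM join, which makes the two steps easy to try locally (the table contents below are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE WTS_EXT_Project (ProjectManager TEXT, projectNbr INT);
CREATE TABLE Importthis (ProjectManager TEXT, projectNbr INT);
INSERT INTO WTS_EXT_Project (ProjectManager) VALUES ('Ann'), ('Bob');
INSERT INTO Importthis VALUES ('Ann', 101), ('Bob', 202);
""")

# SQLite's UPDATE ... FROM spells the join condition in WHERE.
con.execute("""
    UPDATE WTS_EXT_Project
    SET projectNbr = it.projectNbr
    FROM Importthis it
    WHERE WTS_EXT_Project.ProjectManager = it.ProjectManager
""")
rows = con.execute(
    "SELECT ProjectManager, projectNbr FROM WTS_EXT_Project ORDER BY ProjectManager"
).fetchall()
```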
    qid & accept id: (37452178, 37452367) query: How can I use AND condition using array parameter in a query soup:

    soup wrap:

    To find the persons, you can use:

    SELECT ID, NAME
    FROM PEOPLE
    WHERE  INSTR(:COLORS, WARECOLOR) > 0
    GROUP BY ID, NAME
    HAVING COUNT(*) = regexp_count(:COLORS,'[|]') - 1;
    

    regexp_count(:COLORS,'[|]') just counts the | characters in :COLORS. If your table is not properly normalized (you have duplicates), you may use count(distinct WARECOLOR) instead of count(*).

    Then it is simple to get the rows:

    SELECT ID, NAME, COLOR
    FROM PEOPLE
    WHERE (ID, NAME) IN (
        SELECT ID, NAME
        FROM PEOPLE
        WHERE  INSTR(:COLORS, WARECOLOR) > 0
        GROUP BY ID, NAME
        HAVING COUNT(*) = regexp_count(:COLORS,'[|]') - 1
    );
    
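    The HAVING COUNT(*) = number-of-colors trick is classic relational division; the same idea in plain Python (with invented sample rows) may make the intent clearer:

```python
# Invented rows of (id, name, warecolor); colors delimited like '|RED|GREEN|'.
people = [
    (1, "Ann", "RED"), (1, "Ann", "GREEN"),
    (2, "Bob", "RED"),
]
colors = "|RED|GREEN|"

wanted = set(filter(None, colors.split("|")))
by_person = {}
for pid, name, c in people:
    by_person.setdefault((pid, name), set()).add(c)

# Keep only people whose colors cover the whole requested set:
# the HAVING COUNT(*) = number-of-colors idea.
matches = sorted(k for k, cs in by_person.items() if wanted <= cs)
```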
    qid & accept id: (37458399, 37458455) query: Count all rows by status soup:

    soup wrap:
    Select status, count(*)
    from Ticket
    group by status
    

    If you also have to show statuses that have no tickets, I would follow the steps below. There must be a table storing the status details; without it, we can't know which statuses are missing from the Ticket table.

    Let's say the tables are defined as below:

    CREATE TABLE _STATUS(
      STATUS INTEGER,
     STATUS_NAME TEXT
    )
    ;
    
    CREATE TABLE TICKET(
     ID INTEGER NOT NULL,
     TITLE TEXT,
     STATUS INTEGER,
     LAST_UPDATED DATE,
     CREATED DATE
    )
    ;
    

    The query will be

    select  s.status, count(t.status)
    from _status s left join ticket t
    on s.status = t.status
    group by s.status
    
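    A runnable check of the LEFT JOIN counting idea in SQLite (the sample statuses and tickets are invented); counting t.status rather than * keeps an unused status at 0 instead of dropping it:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE _status (status INTEGER, status_name TEXT);
CREATE TABLE ticket (id INTEGER, status INTEGER);
INSERT INTO _status VALUES (1, 'open'), (2, 'closed'), (3, 'on hold');
INSERT INTO ticket VALUES (10, 1), (11, 1), (12, 2);
""")

# count(t.status) counts only matched tickets, so status 3 shows up as 0.
rows = con.execute("""
    SELECT s.status, COUNT(t.status)
    FROM _status s LEFT JOIN ticket t ON s.status = t.status
    GROUP BY s.status
    ORDER BY s.status
""").fetchall()
```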
    qid & accept id: (37540721, 37540847) query: Concatenate in order by using decode in Oracle soup:

    soup wrap:

    Just use a CASE expression; I prefer it as it is easier to read:

    ORDER BY CASE WHEN FC = 'R' THEN 1
                  WHEN FC = 'Y' THEN 2
                  WHEN FC = 'G' THEN 3
             END,
             CASE WHEN FM = 'R' THEN 1
                  WHEN FM = 'Y' THEN 2
                  WHEN FM = 'G' THEN 3
             END,
             CASE WHEN MS = 'R' THEN 1
                  WHEN MS = 'Y' THEN 2
                  WHEN MS = 'G' THEN 3
             END
    

    That is, if I understood what you want to do. I'm not sure I followed the logic you intended: I think your version checks whether FC||FM||MS equals 'R'/'Y'/'G', which I believe is not what you want.

    EDIT: If you want to order first by whether one of the columns is 'R', then by whether one is 'Y', and so on:

    ORDER BY CASE WHEN 'R' IN(FC,FM,MS) THEN 1
                  ELSE 2
             END,
             CASE WHEN 'Y' IN(FC,FM,MS) THEN 1
                  ELSE 2
             END,
             CASE WHEN 'G' IN(FC,FM,MS) THEN 1
                  ELSE 2
             END
    
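    The IN-list ranking from the EDIT is easy to verify in SQLite (the rows are invented, and an id tiebreaker is added for a deterministic result):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INT, FC TEXT, FM TEXT, MS TEXT)")
con.executemany("INSERT INTO t VALUES (?,?,?,?)", [
    (1, 'G', 'G', 'G'),
    (2, 'Y', 'G', 'G'),
    (3, 'G', 'R', 'G'),
])

# Rows containing an 'R' anywhere come first, then rows with a 'Y', then the rest.
rows = [r[0] for r in con.execute("""
    SELECT id FROM t
    ORDER BY CASE WHEN 'R' IN (FC, FM, MS) THEN 1 ELSE 2 END,
             CASE WHEN 'Y' IN (FC, FM, MS) THEN 1 ELSE 2 END,
             id
""")]
```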
    qid & accept id: (37585047, 37585159) query: TSQL - Search a date in database soup:

    soup wrap:

    Use between:

    DECLARE @date date = '2015-03-30'
    
    SELECT [Signature]
    FROM YourTable
    WHERE @date between [From] and [To]
    

    Sample execution with the given sample data:

    DECLARE @DateTest TABLE (Id INT, [Signature] VARCHAR(5), [From] DATE, [To] DATE);
    
    INSERT INTO @DateTest (Id, [Signature], [From], [To])
    VALUES
    (1, 'S01', '2014-01-26', '2016-01-26'),
    (2, 'S02', '2016-01-26', '2016-02-26'),
    (3, 'S03', '2016-02-26', '2016-04-26');
    
    DECLARE @date DATE = '2015-03-30';
    
    SELECT [Signature]
    FROM @DateTest
    WHERE @date BETWEEN [From] AND [To]
    
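    Because ISO-8601 date strings compare in chronological order, the same BETWEEN test can be reproduced in SQLite with text dates (column names quoted since From/To are keywords):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE DateTest (Id INT, Signature TEXT, "From" TEXT, "To" TEXT);
INSERT INTO DateTest VALUES
  (1, 'S01', '2014-01-26', '2016-01-26'),
  (2, 'S02', '2016-01-26', '2016-02-26'),
  (3, 'S03', '2016-02-26', '2016-04-26');
""")

# ISO-8601 strings compare in date order, so BETWEEN works as in T-SQL.
rows = [s for (s,) in con.execute(
    """SELECT Signature FROM DateTest WHERE ? BETWEEN "From" AND "To" """,
    ("2015-03-30",),
)]
```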
    qid & accept id: (37611560, 37613383) query: SQL query for display one field only once which having multiple record soup:

    soup wrap:

    Using a CASE condition and ROW_NUMBER() we can achieve the output above.

    It's purely based on your sample data:

    DECLARE @Table1 TABLE 
        (projName varchar(1), percentage int)
    ;
    
    INSERT INTO @Table1
        (projName, percentage)
    VALUES
        ('A', 10),
        ('A', 25),
        ('B', 20),
        ('B', 30)
    ;
    
    Select CASE WHEN RN = 1 THEN projName ELSE NULL END projName, percentage
    from (select projName, percentage,
                 ROW_NUMBER() OVER (PARTITION BY projName ORDER BY (SELECT NULL)) RN
          from @Table1) T
    

    Applied to your query, the modified answer is:

    Select CASE WHEN T.RN = 1 THEN T.projName ELSE NULL END projName, T.percentage FROM  (select 
    i.invoice_id,
    pr.name as projname ,
    ROW_NUMBER()OVER(PARTITION BY projName ORDER BY (SELECT NULL))RN
    from annexure a,
    project pr,
    sow s,
    invoice i 
    where pr.project_id = s.project_id 
    and a.sow_id = s.sow_id 
    and i.annexure_id = a.annexure_id 
    group by pr.name,i.invoice_date,i.invoice_id )T
    
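    The blank-out-repeats pattern also runs on SQLite 3.25+ (which added window functions); a sketch with the sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE projects (projName TEXT, percentage INT);
INSERT INTO projects VALUES ('A', 10), ('A', 25), ('B', 20), ('B', 30);
""")

# Blank out projName on every row but the first of each partition.
rows = con.execute("""
    SELECT CASE WHEN rn = 1 THEN grp END AS projName, percentage
    FROM (SELECT projName AS grp, percentage,
                 ROW_NUMBER() OVER (PARTITION BY projName
                                    ORDER BY percentage) AS rn
          FROM projects) t
    ORDER BY t.grp, t.percentage
""").fetchall()
```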
    qid & accept id: (37620195, 37653165) query: SQL ROW_NUMBER() always return 1 for each row soup:

    soup wrap:

    It is common for analytic functions to be used through a derived table so that the column is produced and then accessed by subsequent clauses via the column alias. It is particularly common when needing to use row_number() results in a where clause, e.g.

    select * from (select *
                      , row_number() over (partition by X order by Y) as rn
                   from table1
                   ) as d
    where d.rn = 1
    

    Here I believe the same logic applies: you want to calculate a sortorder column, THEN place the data into an XML result. My guess is you want to partition by job number.

    FROM (
      SELECT
            *
          , ROW_NUMBER() OVER (PARTITION BY V_CONSTAT_ACTUAL_DATES.JOB_NUMBER
                                ORDER BY V_CONSTAT_ACTUAL_DATES.DATE_TO_END) AS 'SortOrder'
      FROM homefront.dbo.V_CONSTAT_PROJ_DATES V_CONSTAT_PROJ_DATES
            INNER JOIN homefront.dbo.V_CONSTAT_ACTUAL_DATES V_CONSTAT_ACTUAL_DATES ON V_CONSTAT_PROJ_DATES.JOB_NUMBER = V_CONSTAT_ACTUAL_DATES.JOB_NUMBER
            INNER JOIN homefront.dbo.V_CONSTAT_BASE_DATES V_CONSTAT_BASE_DATES ON V_CONSTAT_ACTUAL_DATES.JOB_NUMBER = V_CONSTAT_BASE_DATES.JOB_NUMBER
                        AND V_CONSTAT_PROJ_DATES.JOB_NUMBER = V_CONSTAT_BASE_DATES.JOB_NUMBER
            INNER JOIN homefront.dbo.V_CONSTAT_SCH_DATES V_CONSTAT_SCH_DATES ON V_CONSTAT_BASE_DATES.JOB_NUMBER = V_CONSTAT_SCH_DATES.JOB_NUMBER
                        AND V_CONSTAT_PROJ_DATES.JOB_NUMBER = V_CONSTAT_SCH_DATES.JOB_NUMBER
                        AND V_CONSTAT_ACTUAL_DATES.JOB_NUMBER = V_CONSTAT_SCH_DATES.JOB_NUMBER
      WHERE V_CONSTAT_ACTUAL_DATES.AREA_DESC = 'Ancaster Augusta Ph 4(A) Condos'
            AND V_CONSTAT_ACTUAL_DATES.DATE_TO_END >= GETDATE()
      ) AS d
    

    and as a full query:

    SELECT (
            SELECT
                  CAST('<' + V_CONSTAT_ACTUAL_DATES.JOB_NUMBER + '>' +
                  CAST((
                        SELECT (
                                     SELECT
                                           CONVERT(date, d.DATE_TO_END) AS 'closingDate'
                                     FOR xml PATH (''), TYPE
                               )
                             , (
                                     SELECT
                                           DATEDIFF(dd, d.ID67, V_CONSTAT_ACTUAL_DATES.DATE_TO_END) - 1 AS 'DaysOfConstruction'
                                     FOR xml PATH (''), TYPE
                               )
                             , (
                                     SELECT
                                           DATEDIFF(dd, GETDATE(), d.DATE_TO_END) AS 'DaysToClosing'
                                     FOR xml PATH (''), TYPE
                               )
                             , (
                                     SELECT
                                           CASE
                                                 WHEN COALESCE(d.IDNOTES2, '') = '' THEN ' '
                                                 ELSE d.IDNOTES2
                                           END AS 'notes'
                                     FOR xml PATH (''), TYPE
                               )
                             , (
                                     SELECT
                                           DATEDIFF(dd, d.ID187, d.ID187) AS 'ScheduleVariance'
                                     FOR xml PATH (''), TYPE
                               )
                             , (
                                     SELECT
                                           SortOrder
                                     FROM (
                                           SELECT
                                                 d.SortOrder
                                     ) AS SubQuery
                                     FOR xml PATH (''), TYPE
                               )
    
    
                        FOR xml PATH ('')
                  )
                  AS varchar(max)
    
                  )
                  + ''
                  AS xml)
      )
    FROM (
      SELECT
            *
          , ROW_NUMBER() OVER (PARTITION BY V_CONSTAT_ACTUAL_DATES.JOB_NUMBER
                                ORDER BY V_CONSTAT_ACTUAL_DATES.DATE_TO_END) AS "SortOrder"
      FROM homefront.dbo.V_CONSTAT_PROJ_DATES V_CONSTAT_PROJ_DATES
            INNER JOIN homefront.dbo.V_CONSTAT_ACTUAL_DATES V_CONSTAT_ACTUAL_DATES ON V_CONSTAT_PROJ_DATES.JOB_NUMBER = V_CONSTAT_ACTUAL_DATES.JOB_NUMBER
            INNER JOIN homefront.dbo.V_CONSTAT_BASE_DATES V_CONSTAT_BASE_DATES ON V_CONSTAT_ACTUAL_DATES.JOB_NUMBER = V_CONSTAT_BASE_DATES.JOB_NUMBER
                        AND V_CONSTAT_PROJ_DATES.JOB_NUMBER = V_CONSTAT_BASE_DATES.JOB_NUMBER
            INNER JOIN homefront.dbo.V_CONSTAT_SCH_DATES V_CONSTAT_SCH_DATES ON V_CONSTAT_BASE_DATES.JOB_NUMBER = V_CONSTAT_SCH_DATES.JOB_NUMBER
                        AND V_CONSTAT_PROJ_DATES.JOB_NUMBER = V_CONSTAT_SCH_DATES.JOB_NUMBER
                        AND V_CONSTAT_ACTUAL_DATES.JOB_NUMBER = V_CONSTAT_SCH_DATES.JOB_NUMBER
      WHERE V_CONSTAT_ACTUAL_DATES.AREA_DESC = 'Ancaster Augusta Ph 4(A) Condos'
            AND V_CONSTAT_ACTUAL_DATES.DATE_TO_END >= GETDATE()
      ) AS d
    ORDER BY
          d.DATE_TO_END
    FOR xml PATH (''), ROOT ('Root')
    
    qid & accept id: (37624090, 37628509) query: Adding an extra column that represents the difference between the closest difference of a previous column soup:

    soup wrap:

    One possible approach is to use window functions.

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.{lag, min, abs}
    
    val df = Seq(
      ("A", -10), ("A", 1), ("A", 5), ("B", 3), ("B", 9)
    ).toDF("type", "time")
    

    First, let's determine the difference between consecutive rows, sorted by time:

    // Partition by type and sort by time
    val w1 = Window.partitionBy($"type").orderBy($"time")
    
    // Difference between this and previous
    val diff = $"time" - lag($"time", 1).over(w1)
    

    Then find minimum over all diffs for a given type:

    // Partition by type, unordered, and take an unbounded window
    val w2 = Window.partitionBy($"Type").rowsBetween(Long.MinValue, Long.MaxValue)
    
    // Minimum difference over type
    val minDiff = min(diff).over(w2)
    
    df.withColumn("min_diff",  minDiff).show
    
    
    // +----+----+--------+
    // |type|time|min_diff|
    // +----+----+--------+
    // |   A| -10|       4|
    // |   A|   1|       4|
    // |   A|   5|       4|
    // |   B|   3|       6|
    // |   B|   9|       6|
    // +----+----+--------+
    

    If your goal is to find the minimum distance between the current row and any other row in a group, you can use a similar approach:

    import org.apache.spark.sql.functions.{lead, when}
    
    // Diff to previous
    val diff_lag = $"time" - lag($"time", 1).over(w1)
    
    // Diff to next
    val diff_lead = lead($"time", 1).over(w1) - $"time"
    
    val diffToClosest = when(
      diff_lag < diff_lead || diff_lead.isNull, 
      diff_lag
    ).otherwise(diff_lead)
    
    df.withColumn("diff_to_closest", diffToClosest).show
    
    // +----+----+---------------+
    // |type|time|diff_to_closest|
    // +----+----+---------------+
    // |   A| -10|             11|
    // |   A|   1|              4|
    // |   A|   5|              4|
    // |   B|   3|              6|
    // |   B|   9|              6|
    // +----+----+---------------+
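
    The two-pass window logic above (per-type consecutive gaps, then a per-type minimum) can be sketched in plain Python; `min_diff_per_group` is a hypothetical helper, not part of the Spark API:

    ```python
    from itertools import groupby

    def min_diff_per_group(rows):
        """For each type group in (type, time) rows, return the minimum gap
        between consecutive times (None for single-row groups)."""
        out = {}
        rows = sorted(rows)  # sort by type, then time (mirrors w1's ordering)
        for key, grp in groupby(rows, key=lambda r: r[0]):
            times = [t for _, t in grp]
            gaps = [b - a for a, b in zip(times, times[1:])]  # lag differences
            out[key] = min(gaps) if gaps else None            # min over the group
        return out

    print(min_diff_per_group([("A", -10), ("A", 1), ("A", 5), ("B", 3), ("B", 9)]))
    # {'A': 4, 'B': 6}
    ```

    This matches the `min_diff` column in the Spark output: A's consecutive gaps are 11 and 4, B's only gap is 6.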
    
    qid & accept id: (37648860, 37649025) query: Summary data even when department is missing for a day soup:

    Assuming that your main table is:

    \n
    create table mydata\n(ReportDate date,\n department varchar2(20),\n Employee varchar2(20));\n
    \n

    We can use the below query:

    \n
     with dates (reportDate) as\n(select to_date('01-05-2016','dd-mm-yyyy') + rownum -1\n     from all_objects\n     where rownum <= \nto_date('03-05-2016','dd-mm-yyyy')-to_date('01-05-2016','dd-mm-yyyy')+1 ),\n departments( department) as \n( select 'First' from dual\n union all \n select 'Second' from dual) ,\nAllReports ( reportDate, Department) as \n(select dt.reportDate, \n   dp.department  \n from dates dt\ncross join \n departments dp )\n select  ar.reportDate, ar.department, count(md.employee)  \n from AllReports ar\n left join myData md\n on ar.ReportDate = md.reportDate and\n    ar.department = md.department\n  group by ar.reportDate, ar.department\n  order by 1, 2\n
    \n

    First we generate dates that we are interested in. In our sample between 01-05-2016 and 03-05-2016. It's in dates WITH.

    \n

    Next we generate list of departments - Departments WITH.

    \n

    We cross join them to generate all possible reports - AllReports WITH.

    \n

    And we use LEFT JOIN to your main table to figure out which data exists and which are missing.

    \n soup wrap:

    Assuming that your main table is:

    create table mydata
    (ReportDate date,
     department varchar2(20),
     Employee varchar2(20));
    

    We can use the query below:

     with dates (reportDate) as
    (select to_date('01-05-2016','dd-mm-yyyy') + rownum -1
         from all_objects
         where rownum <= 
    to_date('03-05-2016','dd-mm-yyyy')-to_date('01-05-2016','dd-mm-yyyy')+1 ),
     departments( department) as 
    ( select 'First' from dual
     union all 
     select 'Second' from dual) ,
    AllReports ( reportDate, Department) as 
    (select dt.reportDate, 
       dp.department  
     from dates dt
    cross join 
     departments dp )
     select  ar.reportDate, ar.department, count(md.employee)  
     from AllReports ar
     left join myData md
     on ar.ReportDate = md.reportDate and
        ar.department = md.department
      group by ar.reportDate, ar.department
      order by 1, 2
    

    First we generate the dates we are interested in (in our sample, between 01-05-2016 and 03-05-2016): the dates CTE.

    Next we generate the list of departments: the departments CTE.

    We cross join them to generate all possible report slots: the AllReports CTE.

    Finally, we LEFT JOIN to your main table to determine which data exists and which is missing.
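
    The same calendar/cross-join/left-join pattern can be sketched in SQLite via Python's sqlite3 (SQLite has no `all_objects` rownum trick, so a recursive CTE generates the dates; the sample rows are made up for illustration):

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE mydata (reportdate TEXT, department TEXT, employee TEXT)")
    con.executemany("INSERT INTO mydata VALUES (?,?,?)",
                    [("2016-05-01", "First", "Alice"),
                     ("2016-05-01", "Second", "Bob"),
                     ("2016-05-03", "First", "Carol")])

    rows = con.execute("""
        WITH RECURSIVE dates(reportdate) AS (        -- calendar of dates
            SELECT '2016-05-01'
            UNION ALL
            SELECT date(reportdate, '+1 day') FROM dates
            WHERE reportdate < '2016-05-03'
        ),
        departments(department) AS (VALUES ('First'), ('Second')),
        allreports AS (SELECT * FROM dates CROSS JOIN departments)
        SELECT ar.reportdate, ar.department, COUNT(md.employee)
        FROM allreports ar
        LEFT JOIN mydata md
          ON ar.reportdate = md.reportdate AND ar.department = md.department
        GROUP BY ar.reportdate, ar.department
        ORDER BY 1, 2
    """).fetchall()
    for r in rows:
        print(r)
    ```

    Every date/department pair appears exactly once, with a count of 0 where no employee row matched, e.g. `('2016-05-02', 'First', 0)`.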

    qid & accept id: (37661098, 37661198) query: Concatenate several columns as comma-separated string soup:

    By using NULLIF you can achieve it.

    \n
    SELECT  Id, STUFF(COALESCE(N',' + NULLIF(Name1, ''), N'') + COALESCE(N',' + NULLIF(Name2, ''), N'')\n              + COALESCE(N',' + NULLIF(Name3, ''), N''), 1, 1, '') AS ConcateStuff\nFROM    #Temp;\n
    \n

    Result

    \n
    Id  ConcateStuff\n-----------------\n1   Name1,Name3\n2   Name1,Name2,Name3\n3   Name3\n4   Name3\n
    \n soup wrap:

    You can achieve it by using NULLIF:

    SELECT  Id, STUFF(COALESCE(N',' + NULLIF(Name1, ''), N'') + COALESCE(N',' + NULLIF(Name2, ''), N'')
                  + COALESCE(N',' + NULLIF(Name3, ''), N''), 1, 1, '') AS ConcateStuff
    FROM    #Temp;
    

    Result

    Id  ConcateStuff
    -----------------
    1   Name1,Name3
    2   Name1,Name2,Name3
    3   Name3
    4   Name3
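
    The STUFF/COALESCE/NULLIF pattern reduces to one rule: treat empty strings like NULL, drop both, and join the survivors with commas. A minimal sketch of that logic (`concat_stuff` is a hypothetical helper name):

    ```python
    def concat_stuff(*names):
        # NULLIF(x, '') turns '' into NULL; COALESCE drops the NULL term;
        # STUFF strips the leading comma -- i.e. join only non-empty values.
        return ",".join(n for n in names if n)  # None and '' are both falsy

    print(concat_stuff("Name1", None, "Name3"))  # Name1,Name3
    print(concat_stuff("", "", "Name3"))         # Name3
    ```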
    
    qid & accept id: (37662540, 37674923) query: Order by (parent, child group) and values alphabetically soup:

    You can do this by self-joining to generate a list of values from the hierarchy to order on, as shown in the code below. I've expanded to add an extra level of hierarchy to the original example to show how this would work. Clearly this depends on knowing the number of hierarchy levels to generate a reasonable plan (you could always do 10 levels, for example, but that will be a big performance hit if you only have 3 levels of hierarchy in your example).

    \n

    With further thought I imagine you could use an EXEC statement to generate the SQL needed for a particular hierarchy level, rather than generating manually as below (which will have some optimisations as e.g. we know if an entry does not have anything at L3 it won't have anything at L4 either).

    \n
    WITH resultset (resultid, parentid, valuex) AS (\nSELECT 1,0,'Grandparent' UNION ALL\nSELECT 2,1,'Parent1' UNION ALL\nSELECT 3,1,'Parent2' UNION ALL\nSELECT 4,2,'Child1' UNION ALL\nSELECT 5,2,'Child2' UNION ALL\nSELECT 6,3,'Child3' UNION ALL\nSELECT 7,3,'Child4' UNION ALL\nSELECT 8,4,'Child1_Child1' UNION ALL\nSELECT 9,7,'Child4_Child1' UNION ALL\nSELECT 10,6,'Child3_Child1')\nSELECT l1.resultid , l1.parentid, l1.valuex, l2.resultid l2val, l3.resultid l3val,l4.resultid l4val,\n\n-- rewrite COALESCE so clearer how this matches the pattern below\nCASE WHEN l4.resultid IS NULL THEN\nCASE WHEN l3.resultid IS NULL THEN\nCASE WHEN l2.resultid IS NULL THEN l1.valuex \nELSE l2.valuex END\nELSE l3.valuex END\nELSE l4.valuex END o1,\n\nCASE WHEN l4.resultid IS NULL THEN \nCASE WHEN l3.resultid IS NULL THEN \nCASE WHEN l2.resultid IS NULL THEN '' \nELSE l1.valuex END\nELSE COALESCE (l2.valuex, l1.valuex, '') END\nELSE COALESCE (l3.valuex, l2.valuex, l1.valuex, '') END o2,\n\nCASE WHEN l3.resultid IS NULL THEN ''\nWHEN l4.valuex IS NULL THEN l1.valuex\nELSE l2.valuex END o3,\n\nCASE WHEN l2.valuex IS NULL THEN '' \nWHEN l4.valuex IS NULL THEN '' ELSE l1.valuex END o4\n\nFROM resultset l1\nleft join resultset l2 on l1.parentid = l2.resultid\nleft join resultset l3 on l2.parentid = l3.resultid\nleft join resultset l4 on l3.parentid = l4.resultid\nORDER BY o1, o2, o3, o4\n
    \n

    Results (apologies for bad formatting):

    \n
    RESULTID    PARENTID    VALUEX          L2VAL   L3VAL   L4VAL   O1          O2      O3      O4\n    1       0           Grandparent     (null)  (null)  (null)  Grandparent         \n    2       1           Parent1         1       (null)  (null)  Grandparent Parent1     \n    4       2           Child1          2       1       (null)  Grandparent Parent1 Child1  \n    8       4           Child1_Child1   4       2       1       Grandparent Parent1 Child1  Child1_Child1\n    5       2           Child2          2       1       (null)  Grandparent Parent1 Child2  \n    3       1           Parent2         1       (null)  (null)  Grandparent Parent2     \n    6       3           Child3          3       1       (null)  Grandparent Parent2 Child3  \n    10      6           Child3_Child1   6       3       1       Grandparent Parent2 Child3  Child3_Child1\n    7       3           Child4          3       1       (null)  Grandparent Parent2 Child4  \n    9       7           Child4_Child1   7       3       1       Grandparent Parent2 Child4  Child4_Child1\n
    \n soup wrap:

    You can do this by self-joining to generate a list of values from the hierarchy to order on, as shown in the code below. I've expanded to add an extra level of hierarchy to the original example to show how this would work. Clearly this depends on knowing the number of hierarchy levels to generate a reasonable plan (you could always do 10 levels, for example, but that will be a big performance hit if you only have 3 levels of hierarchy in your example).

    With further thought, I imagine you could use an EXEC statement to generate the SQL needed for a particular hierarchy depth, rather than writing it manually as below (the manual version does allow some optimisations, e.g. we know that if an entry has nothing at L3 it won't have anything at L4 either).

    WITH resultset (resultid, parentid, valuex) AS (
    SELECT 1,0,'Grandparent' UNION ALL
    SELECT 2,1,'Parent1' UNION ALL
    SELECT 3,1,'Parent2' UNION ALL
    SELECT 4,2,'Child1' UNION ALL
    SELECT 5,2,'Child2' UNION ALL
    SELECT 6,3,'Child3' UNION ALL
    SELECT 7,3,'Child4' UNION ALL
    SELECT 8,4,'Child1_Child1' UNION ALL
    SELECT 9,7,'Child4_Child1' UNION ALL
    SELECT 10,6,'Child3_Child1')
    SELECT l1.resultid , l1.parentid, l1.valuex, l2.resultid l2val, l3.resultid l3val,l4.resultid l4val,
    
    -- rewrite COALESCE so clearer how this matches the pattern below
    CASE WHEN l4.resultid IS NULL THEN
    CASE WHEN l3.resultid IS NULL THEN
    CASE WHEN l2.resultid IS NULL THEN l1.valuex 
    ELSE l2.valuex END
    ELSE l3.valuex END
    ELSE l4.valuex END o1,
    
    CASE WHEN l4.resultid IS NULL THEN 
    CASE WHEN l3.resultid IS NULL THEN 
    CASE WHEN l2.resultid IS NULL THEN '' 
    ELSE l1.valuex END
    ELSE COALESCE (l2.valuex, l1.valuex, '') END
    ELSE COALESCE (l3.valuex, l2.valuex, l1.valuex, '') END o2,
    
    CASE WHEN l3.resultid IS NULL THEN ''
    WHEN l4.valuex IS NULL THEN l1.valuex
    ELSE l2.valuex END o3,
    
    CASE WHEN l2.valuex IS NULL THEN '' 
    WHEN l4.valuex IS NULL THEN '' ELSE l1.valuex END o4
    
    FROM resultset l1
    left join resultset l2 on l1.parentid = l2.resultid
    left join resultset l3 on l2.parentid = l3.resultid
    left join resultset l4 on l3.parentid = l4.resultid
    ORDER BY o1, o2, o3, o4
    

    Results:

    RESULTID    PARENTID    VALUEX          L2VAL   L3VAL   L4VAL   O1          O2      O3      O4
        1       0           Grandparent     (null)  (null)  (null)  Grandparent         
        2       1           Parent1         1       (null)  (null)  Grandparent Parent1     
        4       2           Child1          2       1       (null)  Grandparent Parent1 Child1  
        8       4           Child1_Child1   4       2       1       Grandparent Parent1 Child1  Child1_Child1
        5       2           Child2          2       1       (null)  Grandparent Parent1 Child2  
        3       1           Parent2         1       (null)  (null)  Grandparent Parent2     
        6       3           Child3          3       1       (null)  Grandparent Parent2 Child3  
        10      6           Child3_Child1   6       3       1       Grandparent Parent2 Child3  Child3_Child1
        7       3           Child4          3       1       (null)  Grandparent Parent2 Child4  
        9       7           Child4_Child1   7       3       1       Grandparent Parent2 Child4  Child4_Child1
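
    Conceptually, the o1..o4 columns form each row's path of values from the root, and sorting on that path yields the ordering above. A plain-Python sketch that walks parent pointers to arbitrary depth (unlike the fixed four-level self-join), using the same sample rows:

    ```python
    # (resultid, parentid, valuex) rows from the CTE in the answer
    rows = [(1, 0, 'Grandparent'), (2, 1, 'Parent1'), (3, 1, 'Parent2'),
            (4, 2, 'Child1'), (5, 2, 'Child2'), (6, 3, 'Child3'),
            (7, 3, 'Child4'), (8, 4, 'Child1_Child1'),
            (9, 7, 'Child4_Child1'), (10, 6, 'Child3_Child1')]

    by_id = {rid: (pid, val) for rid, pid, val in rows}

    def path(rid):
        """Walk parent pointers up to the root; the tuple of values is the
        sort key (a shorter path sorts before its children, like the ''
        padding in the SQL version)."""
        out = []
        while rid in by_id:
            pid, val = by_id[rid]
            out.append(val)
            rid = pid
        return tuple(reversed(out))

    ordered = [rid for rid, _, _ in sorted(rows, key=lambda r: path(r[0]))]
    print(ordered)  # [1, 2, 4, 8, 5, 3, 6, 10, 7, 9]
    ```

    This reproduces the RESULTID order of the SQL result set.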
    
    qid & accept id: (37685156, 37685263) query: How to limit the number of rows returned for each group soup:

    So I'm attempting to use the row_number() analytic to assign a row number to for each file in a group. starting at 1 going to X and then use a where clause to limit the row_number to just the 2 files desired... Since the row_number has to materialize before we can apply a where clause to it, I need to use a subselect or CTE.

    \n

    Not sure how well a CTE and connect by prior along with a row_number will play together... May have to use 2 CTE's

    \n

    I doubt I have the syntax perfect without testing; but this convey's a general concept.

    \n

    1st attempt:

    \n
    With CTE AS (\nselect id, name, date, connect_by_root name as "Group",\nROW_NUMBER() over (partition by connect_by_root name order by ID ) RN\nfrom myTable\nwhere connect_by_isleaf = 1\nstart with parentid = 0\nconnect by prior id = parentid)\nSelect * from cte where RN <= 2\n
    \n

    Second attempt:

    \n
    With CTE AS (\nselect id, name, date, connect_by_root name as "Group" from myTable\nwhere connect_by_isleaf = 1\nstart with parentid = 0\nconnect by prior id = parentid),\n\nCTE2 as (Select A.*, \n        Row_number() over (partition by Group order by ID) RN from CTE A)\nSelect * from cte2 where RN <= 2\n
    \n soup wrap:

    So I'm attempting to use the ROW_NUMBER() analytic to assign a row number to each file in a group, starting at 1 and going to X, and then use a WHERE clause to limit the results to just the 2 files desired. Since the row number has to materialize before we can apply a WHERE clause to it, I need to use a subselect or CTE.

    I'm not sure how well a CTE and CONNECT BY PRIOR will play together with ROW_NUMBER(); you may have to use two CTEs.

    I doubt I have the syntax perfect without testing, but this conveys the general concept.

    1st attempt:

    With CTE AS (
    select id, name, date, connect_by_root name as "Group",
    ROW_NUMBER() over (partition by connect_by_root name order by ID ) RN
    from myTable
    where connect_by_isleaf = 1
    start with parentid = 0
    connect by prior id = parentid)
    Select * from cte where RN <= 2
    

    Second attempt:

    With CTE AS (
    select id, name, date, connect_by_root name as "Group" from myTable
    where connect_by_isleaf = 1
    start with parentid = 0
    connect by prior id = parentid),
    
    CTE2 as (Select A.*, 
            Row_number() over (partition by "Group" order by ID) RN from CTE A)
    Select * from cte2 where RN <= 2
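
    The "top N per group" core of both attempts (leaving aside CONNECT BY, which SQLite lacks) can be verified with Python's sqlite3 — window functions need SQLite ≥ 3.25, bundled with most recent Pythons; the table and data are made up for illustration:

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE files (id INTEGER, grp TEXT)")
    con.executemany("INSERT INTO files VALUES (?,?)",
                    [(1, 'A'), (2, 'A'), (3, 'A'), (4, 'B'), (5, 'B'), (6, 'B')])

    rows = con.execute("""
        WITH numbered AS (
            SELECT id, grp,
                   ROW_NUMBER() OVER (PARTITION BY grp ORDER BY id) AS rn
            FROM files)
        SELECT id, grp FROM numbered WHERE rn <= 2  -- keep 2 rows per group
        ORDER BY grp, id
    """).fetchall()
    print(rows)  # [(1, 'A'), (2, 'A'), (4, 'B'), (5, 'B')]
    ```

    The CTE materializes the row numbers first, so the outer WHERE can filter on them — exactly the reason the answer reaches for a subselect or CTE.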
    
    qid & accept id: (37691287, 37691538) query: SQL: Wordpress users ordered by the date of their latest post (CPT) soup:

    You need to order better by ID than post_date or post_author excluding admin post_author>1:

    \n
    SELECT *\nFROM `wp_posts` \nWHERE `post_author` != 1 \nAND `post_status` = 'publish'\nAND `post_type` = 'my_custom_type' \nGROUP BY `post_author` \nORDER BY `ID` DESC, `post_author` ASC LIMIT 5\n
    \n
    \n

    Update:

    \n

    You will get now last 5 authors list ordered by ascendant date (ASC 'ID') that have 'published' posts (custom post type = 'my_custom_type'), excluding Admin (user ID = 1). And at the end the total post count for each author.

    \n

    Here is the query:

    \n
    select t1.*, t2.author_count\nfrom `wp_posts` t1\ninner join (\n    select max(`ID`) as `ID`, `post_author`, count(1) as author_count\n    from `wp_posts`\n    where `post_author` != '1'\n    and `post_status` = 'publish'\n    and `post_type` = 'my_custom_type'\n    group by `post_author`\n) t2 on t1.`ID` = t2.`ID` and t1.`post_author` = t2.`post_author` \norder by t1.`ID` desc limit 5\n
    \n

    author_count is the generated column that counts total 'published' posts, with a 'post_type' = 'my_custom_type' for each selected author.

    \n

    Based on this answer.

    \n soup wrap:

    You'd do better to order by ID than by post_date or post_author, excluding the admin (post_author = 1):

    SELECT *
    FROM `wp_posts` 
    WHERE `post_author` != 1 
    AND `post_status` = 'publish'
    AND `post_type` = 'my_custom_type' 
    GROUP BY `post_author` 
    ORDER BY `ID` DESC, `post_author` ASC LIMIT 5
    

    Update:

    You will now get the last 5 authors that have 'published' posts (custom post type = 'my_custom_type'), ordered by descending ID (most recent first), excluding the admin (user ID = 1), together with the total post count for each author.

    Here is the query:

    select t1.*, t2.author_count
    from `wp_posts` t1
    inner join (
        select max(`ID`) as `ID`, `post_author`, count(1) as author_count
        from `wp_posts`
        where `post_author` != '1'
        and `post_status` = 'publish'
        and `post_type` = 'my_custom_type'
        group by `post_author`
    ) t2 on t1.`ID` = t2.`ID` and t1.`post_author` = t2.`post_author` 
    order by t1.`ID` desc limit 5
    

    author_count is a generated column that counts the total 'published' posts with post_type = 'my_custom_type' for each selected author.

    Based on this answer.
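
    The key idea — pick each author's latest post via MAX(ID) in a grouped subquery, then join back for the full row — can be checked against a tiny SQLite stand-in for wp_posts (columns trimmed to the ones the query needs; sample data invented):

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE wp_posts (ID INTEGER, post_author INTEGER, post_status TEXT)")
    con.executemany("INSERT INTO wp_posts VALUES (?,?,?)",
                    [(1, 1, 'publish'),   # admin: excluded
                     (2, 2, 'publish'), (3, 2, 'publish'),
                     (4, 3, 'publish'), (5, 3, 'draft')])  # draft: not counted

    rows = con.execute("""
        SELECT t1.ID, t1.post_author, t2.author_count
        FROM wp_posts t1
        JOIN (SELECT MAX(ID) AS ID, post_author, COUNT(1) AS author_count
              FROM wp_posts
              WHERE post_author != 1 AND post_status = 'publish'
              GROUP BY post_author) t2
          ON t1.ID = t2.ID AND t1.post_author = t2.post_author
        ORDER BY t1.ID DESC LIMIT 5
    """).fetchall()
    print(rows)  # [(4, 3, 1), (3, 2, 2)]
    ```

    One row per author survives (their highest ID), paired with that author's published-post count.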

    qid & accept id: (37712305, 37712461) query: How do I convert a decimal representation of Days into the number of hours and minutes in Oracle SQL? soup:

    Oracle Setup:

    \n
    CREATE TABLE table_name ( value ) AS\nSELECT 0.004722222222222222222222222222222222222223 FROM DUAL UNION ALL\nSELECT 3.12383101851851851851851851851851851851 FROM DUAL UNION ALL\nSELECT 0.000856481481481481481481481481481481481479 FROM DUAL UNION ALL\nSELECT 0.002592592592592592592592592592592592592593 FROM DUAL UNION ALL\nSELECT 0.001041666666666666666666666666666666666667 FROM DUAL;\n
    \n

    Query:

    \n
    SELECT NUMTODSINTERVAL( value, 'DAY' ) FROM table_name;\n
    \n

    Output:

    \n
    NUMTODSINTERVAL(VALUE,'DAY')\n----------------------------\n0 0:6:48.0                   \n3 2:58:19.0                  \n0 0:1:14.0                   \n0 0:3:44.0                   \n0 0:1:30.0                   \n
    \n

    Query 2:

    \n
    SELECT TRIM( BOTH FROM\n         CASE WHEN dd <> 0 THEN dd || ' Days' END\n         || CASE WHEN hh <> 0 THEN ' ' || hh || ' Hours' END\n         || CASE WHEN mm <> 0 THEN ' ' || mm || ' Minutes' END\n         || CASE WHEN ss <> 0 THEN ' ' || ss || ' Seconds' END\n       ) AS period\nFROM   (\n  SELECT EXTRACT( DAY    FROM period ) AS dd,\n         EXTRACT( HOUR   FROM period ) AS hh,\n         EXTRACT( MINUTE FROM period ) AS mm,\n         EXTRACT( SECOND FROM period ) AS ss\n  FROM   (\n    SELECT NUMTODSINTERVAL( value, 'DAY' ) AS period\n    FROM   table_name\n  )\n);\n
    \n

    Output:

    \n
    PERIOD\n------------------------------------\n6 Minutes 48 Seconds\n3 Days 2 Hours 58 Minutes 19 Seconds\n1 Minutes 14 Seconds\n3 Minutes 44 Seconds\n1 Minutes 30 Seconds\n
    \n soup wrap:

    Oracle Setup:

    CREATE TABLE table_name ( value ) AS
    SELECT 0.004722222222222222222222222222222222222223 FROM DUAL UNION ALL
    SELECT 3.12383101851851851851851851851851851851 FROM DUAL UNION ALL
    SELECT 0.000856481481481481481481481481481481481479 FROM DUAL UNION ALL
    SELECT 0.002592592592592592592592592592592592592593 FROM DUAL UNION ALL
    SELECT 0.001041666666666666666666666666666666666667 FROM DUAL;
    

    Query:

    SELECT NUMTODSINTERVAL( value, 'DAY' ) FROM table_name;
    

    Output:

    NUMTODSINTERVAL(VALUE,'DAY')
    ----------------------------
    0 0:6:48.0                   
    3 2:58:19.0                  
    0 0:1:14.0                   
    0 0:3:44.0                   
    0 0:1:30.0                   
    

    Query 2:

    SELECT TRIM( BOTH FROM
             CASE WHEN dd <> 0 THEN dd || ' Days' END
             || CASE WHEN hh <> 0 THEN ' ' || hh || ' Hours' END
             || CASE WHEN mm <> 0 THEN ' ' || mm || ' Minutes' END
             || CASE WHEN ss <> 0 THEN ' ' || ss || ' Seconds' END
           ) AS period
    FROM   (
      SELECT EXTRACT( DAY    FROM period ) AS dd,
             EXTRACT( HOUR   FROM period ) AS hh,
             EXTRACT( MINUTE FROM period ) AS mm,
             EXTRACT( SECOND FROM period ) AS ss
      FROM   (
        SELECT NUMTODSINTERVAL( value, 'DAY' ) AS period
        FROM   table_name
      )
    );
    

    Output:

    PERIOD
    ------------------------------------
    6 Minutes 48 Seconds
    3 Days 2 Hours 58 Minutes 19 Seconds
    1 Minutes 14 Seconds
    3 Minutes 44 Seconds
    1 Minutes 30 Seconds
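
    What NUMTODSINTERVAL(value, 'DAY') does arithmetically is split a fractional day count into days, hours, minutes, and seconds. A minimal Python sketch of that conversion (`days_to_dhms` is a hypothetical helper; seconds are rounded, matching the whole-second outputs above):

    ```python
    def days_to_dhms(days):
        """Split a fractional number of days into (days, hours, minutes, seconds)."""
        total = round(days * 86400)       # 86400 seconds per day
        d, rem = divmod(total, 86400)
        h, rem = divmod(rem, 3600)
        m, s = divmod(rem, 60)
        return d, h, m, s

    print(days_to_dhms(3.12383101851851851851))  # (3, 2, 58, 19)
    print(days_to_dhms(0.004722222222222222))    # (0, 0, 6, 48)
    ```

    These match the `3 2:58:19.0` and `0 0:6:48.0` rows of the first query's output.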